DPDK patches and discussions
* [dpdk-dev] [PATCH 00/19] A new net PMD - ice
@ 2018-11-23  6:56 Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 01/19] net/ice: add base code Wenzhuo Lu
                   ` (24 more replies)
  0 siblings, 25 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds support for a new net PMD for the Intel®
Ethernet Network Adapters E810, also called ice.

Besides enabling this new NIC, the series implements a number of
other features supported by it, listed below (a short bring-up
sketch follows the lists):

Basic features:
1. Basic device operations: probe, initialization, start/stop, configure, info get.
2. RX/TX queue operations: setup/release, start/stop, info get.
3. RX/TX.

HW offload features:
1. CRC stripping/insertion.
2. L2/L3 checksum stripping/insertion.
3. PVID setting.
4. TPID change.
5. TSO (LRO/RSC not supported).

Stats:
1. Statistics & xstats.

Switch functions:
1. MAC filter add/delete.
2. VLAN filter add/delete.

Power saving:
1. RX interrupt mode.

Misc:
1. Interrupt for link status.
2. Firmware info query.
3. Jumbo frame support.
4. Ptype check.
5. EEPROM check and set.
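
For readers new to the ethdev API, the basic device and queue
operations listed above map onto the usual bring-up sequence. A
minimal sketch using only standard rte_ethdev calls (a pre-created
mbuf mempool "mp" is assumed; error handling is reduced to early
returns):

    #include <rte_ethdev.h>

    static int port_init(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf = { 0 };  /* default device config */
        int socket = rte_eth_dev_socket_id(port_id);

        /* 1 RX queue and 1 TX queue with 512 descriptors each */
        if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0)
            return -1;
        if (rte_eth_rx_queue_setup(port_id, 0, 512, socket, NULL, mp) < 0)
            return -1;
        if (rte_eth_tx_queue_setup(port_id, 0, 512, socket, NULL) < 0)
            return -1;
        return rte_eth_dev_start(port_id);
    }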

Wenzhuo Lu (19):
  net/ice: add base code
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advanced RX/TX
  net/ice: support descriptor ops
  doc: add ICE description and update release note

 MAINTAINERS                              |    7 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   39 +
 doc/guides/nics/ice.rst                  |   78 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   76 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_acl.c           |    4 +
 drivers/net/ice/base/ice_acl.h           |    7 +
 drivers/net/ice/base/ice_acl_ctrl.c      |    4 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1724 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_bitops.h        |  233 +
 drivers/net/ice/base/ice_common.c        | 3332 ++++++++++
 drivers/net/ice/base/ice_common.h        |  159 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_pipe.c     |    5 +
 drivers/net/ice/base/ice_flex_pipe.h     |   83 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.c          |    4 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_impl_guide.c    |  167 +
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2290 +++++++
 drivers/net/ice/base/ice_nvm.c           |  388 ++
 drivers/net/ice/base/ice_osdep.h         |  491 ++
 drivers/net/ice/base/ice_protocol_type.h |  237 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 1715 ++++++
 drivers/net/ice/base/ice_sched.h         |   68 +
 drivers/net/ice/base/ice_sriov.c         |  129 +
 drivers/net/ice/base/ice_sriov.h         |   35 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2415 ++++++++
 drivers/net/ice/base/ice_switch.h        |  320 +
 drivers/net/ice/base/ice_type.h          |  789 +++
 drivers/net/ice/base/virtchnl.h          |  787 +++
 drivers/net/ice/ice_ethdev.c             | 3302 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_lan_rxtx.c           | 2922 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  155 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 mk/rte.app.mk                            |    1 +
 46 files changed, 33579 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_acl.c
 create mode 100644 drivers/net/ice/base/ice_acl.h
 create mode 100644 drivers/net/ice/base/ice_acl_ctrl.c
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_bitops.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_pipe.c
 create mode 100644 drivers/net/ice/base/ice_flex_pipe.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.c
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_impl_guide.c
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_sriov.c
 create mode 100644 drivers/net/ice/base/ice_sriov.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/virtchnl.h
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3


* [dpdk-dev] [PATCH 01/19] net/ice: add base code
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 02/19] net/ice: support device initialization Wenzhuo Lu
                   ` (23 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add the ice base code, current version 2018.10.30.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                              |    6 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_acl.c           |    4 +
 drivers/net/ice/base/ice_acl.h           |    7 +
 drivers/net/ice/base/ice_acl_ctrl.c      |    4 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1724 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_bitops.h        |  233 +
 drivers/net/ice/base/ice_common.c        | 3332 ++++++++++
 drivers/net/ice/base/ice_common.h        |  159 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_pipe.c     |    5 +
 drivers/net/ice/base/ice_flex_pipe.h     |   83 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.c          |    4 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_impl_guide.c    |  167 +
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2290 +++++++
 drivers/net/ice/base/ice_nvm.c           |  388 ++
 drivers/net/ice/base/ice_osdep.h         |  491 ++
 drivers/net/ice/base/ice_protocol_type.h |  237 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 1715 ++++++
 drivers/net/ice/base/ice_sched.h         |   68 +
 drivers/net/ice/base/ice_sriov.c         |  129 +
 drivers/net/ice/base/ice_sriov.h         |   35 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2415 ++++++++
 drivers/net/ice/base/ice_switch.h        |  320 +
 drivers/net/ice/base/ice_type.h          |  789 +++
 drivers/net/ice/base/virtchnl.h          |  787 +++
 34 files changed, 26628 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_acl.c
 create mode 100644 drivers/net/ice/base/ice_acl.h
 create mode 100644 drivers/net/ice/base/ice_acl_ctrl.c
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_bitops.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_pipe.c
 create mode 100644 drivers/net/ice/base/ice_flex_pipe.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.c
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_impl_guide.c
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_sriov.c
 create mode 100644 drivers/net/ice/base/ice_sriov.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 19353ac..00a5e03 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..d8c7a9b
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.10.30, released by the team which develops the base
+drivers for the ice NICs. The base/ directory contains the
+original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h
new file mode 100644
index 0000000..730b593
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ACL_H_
+#define _ICE_ACL_H_
+#endif /* _ICE_ACL_H_ */
diff --git a/drivers/net/ice/base/ice_acl_ctrl.c b/drivers/net/ice/base/ice_acl_ctrl.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_acl_ctrl.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..e711502
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1724 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and the driver is
+	 * expected to release the resource before the timeout. This value is
+	 * provided in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
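
The request/release pair above implies a bracketed usage pattern:
acquire the resource, finish the access within the firmware-provided
timeout, then release. A sketch with hypothetical driver helpers
(drv_acquire_res()/drv_release_res() are illustrative names, not
functions from this patch):

    if (!drv_acquire_res(hw, ICE_AQC_RES_ID_NVM, ICE_AQC_RES_ACCESS_READ,
                         ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS)) {
        /* NVM access must complete before the timeout that firmware
         * returned in ice_aqc_req_res.timeout (milliseconds).
         */
        drv_release_res(hw, ICE_AQC_RES_ID_NVM);
    }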
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_SRIOV				0x0012
+#define ICE_AQC_CAPS_VF					0x0013
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In the response, a non-zero value means that not all of the
+	 * configuration was returned and a new command shall be sent with
+	 * this value in the 'first desc' field.
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
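
Retrieving the whole switch configuration is therefore a loop: resend
the command with the returned 'element' value until it comes back as
zero. A sketch, assuming the CPU_TO_LE16/LE16_TO_CPU byte-order
helpers and a hypothetical drv_get_sw_cfg() that posts command 0x0200
and copies the response descriptor back into 'cmd':

    u16 next_elem = 0;

    do {
        struct ice_aqc_get_sw_cfg cmd = {
            .element = CPU_TO_LE16(next_elem),
        };

        drv_get_sw_cfg(hw, &cmd, buf, buf_len);
        next_elem = LE16_TO_CPU(cmd.element);
    } while (next_elem != 0);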
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, referring to the number of rules.
+	 * ops: update switch rules, referring to the number of filters.
+	 * ops: remove switch rules, referring to the entry index.
+	 * ops: get switch rules, referring to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. Last two bits
+	 * are other action identifier
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
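
For orientation, the 32-bit 'act' word of a plain forward-to-VSI rule
combines the fields above roughly as follows (VSI number 5 is an
arbitrary example; 'rule' is a hypothetical pointer to a
struct ice_sw_rule_lkup_rx_tx, and CPU_TO_LE32 is assumed from
ice_osdep.h):

    u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
              ((5 << ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M) |
              ICE_SINGLE_ACT_VALID_BIT |
              ICE_SINGLE_ACT_LAN_ENABLE;

    rule->act = CPU_TO_LE32(act);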
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:1 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+/* Add switch rule response:
+ * The content of the return buffer is the same as the input buffer. The
+ * status field and LUT index are updated as part of the response.
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} __packed pdata;
+};
+
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* PHY type defines (extended):
+ * The first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 reserved;
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 rsvd0;
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 reserved4;
+};
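
Note that link_speed is a bitmap rather than an enum, so decoding
picks the highest set bit. A sketch ('link' is a hypothetical pointer
to this structure; LE16_TO_CPU is assumed from ice_osdep.h):

    u16 speed = LE16_TO_CPU(link->link_speed);
    u32 mbps = (speed & ICE_AQ_LINK_SPEED_40GB) ? 40000 :
               (speed & ICE_AQ_LINK_SPEED_25GB) ? 25000 :
               (speed & ICE_AQ_LINK_SPEED_10GB) ? 10000 :
               (speed & ICE_AQ_LINK_SPEED_1000MB) ? 1000 : 0;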
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+/* Send to PF command (indirect 0x0801); 'id' is only used by the PF.
+ *
+ * Send to VF command (indirect 0x0802); 'id' is only used by the PF.
+ */
+struct ice_aqc_pf_vf_msg {
+	__le32 id;
+	u32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Please note that the length of
+ * each struct ice_aqc_add_tx_qgrp is variable due
+ * to the variable number of queues in each group!
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
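
Because txqs[] is a variable-length array declared with one element,
the buffer size for a group carrying 'n' queues (n >= 1) is computed
as, for example:

    u16 grp_size = sizeof(struct ice_aqc_add_tx_qgrp) +
                   (n - 1) * sizeof(struct ice_aqc_add_txqs_perq);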
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
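
The alignment note above translates into the following size
arithmetic for one item carrying 'n' queue ids:

    u16 item_size = sizeof(struct ice_aqc_dis_txq_item) +
                    (n - 1) * sizeof(__le16);

    if (!(n % 2))
        item_size += 2;    /* keep the next parent_teid 4-byte aligned */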
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
+
+
+
+
+
+
+
+/* LAN Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_high: opaque data high-half
+ * @cookie_low: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_pf_vf_msg virt;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
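+/* A direct command is issued by filling in a descriptor and posting it on
+ * the ATQ with no attached buffer; see ice_clear_pf_cfg() in ice_common.c
+ * for the canonical pattern:
+ *	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+ *	ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ */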
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
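+/* For example (illustrative only), an indirect command carries
+ * ICE_AQ_FLAG_BUF (0x1000) to indicate an attached buffer, and
+ * additionally ICE_AQ_FLAG_RD (0x400) when FW must read that buffer,
+ * as ice_cfg_fw_log() does for its indirect write.
+ */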
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* Object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+	/* PF/VF mailbox commands */
+	ice_mbx_opc_send_msg_to_pf			= 0x0801,
+	ice_mbx_opc_send_msg_to_vf			= 0x0802,
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
new file mode 100644
index 0000000..ac6a51b
--- /dev/null
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_BITOPS_H_
+#define _ICE_BITOPS_H_
+
+/* Define the size of the bitmap chunk */
+typedef u32 ice_bitmap_t;
+
+
+/* Number of bits per bitmap chunk */
+#define BITS_PER_CHUNK		(BITS_PER_BYTE * sizeof(ice_bitmap_t))
+/* Determine which chunk a bit belongs in */
+#define BIT_CHUNK(nr)		((nr) / BITS_PER_CHUNK)
+/* How many chunks are required to store this many bits */
+#define BITS_TO_CHUNKS(sz)	DIVIDE_AND_ROUND_UP((sz), BITS_PER_CHUNK)
+/* Which bit inside a chunk this bit corresponds to */
+#define BIT_IN_CHUNK(nr)	BIT((nr) % BITS_PER_CHUNK)
+/* How many bits are valid in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_BITS(nr)	((((nr) - 1) % BITS_PER_CHUNK) + 1)
+/* Generate a bitmask of valid bits in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >> \
+				 (BITS_PER_CHUNK - LAST_CHUNK_BITS(nr)))
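+/* Worked example (illustrative): with 32-bit chunks and nr = 40,
+ * LAST_CHUNK_BITS(40) = 8 and LAST_CHUNK_MASK(40) = 0xFF, i.e. only
+ * the low 8 bits of the final chunk are valid.
+ */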
+
+#define ice_declare_bitmap(A, sz) \
+	ice_bitmap_t A[BITS_TO_CHUNKS(sz)]
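+/* Example usage (illustrative only); with 32-bit chunks this declares a
+ * two-chunk array:
+ *	ice_declare_bitmap(bmp, 40);
+ *	ice_zero_bitmap(bmp, 40);
+ *	ice_set_bit(5, bmp);
+ *	if (ice_is_bit_set(bmp, 5))
+ *		...
+ */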
+
+/**
+ * ice_is_bit_set - Check state of a bit in a bitmap
+ * @bitmap: the bitmap to check
+ * @nr: the bit to check
+ *
+ * Returns true if bit nr of bitmap is set. False otherwise. Assumes that nr
+ * is less than the size of the bitmap.
+ */
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+/**
+ * ice_clear_bit - Clear a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Clears the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_clear_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+	bitmap[BIT_CHUNK(nr)] &= ~BIT_IN_CHUNK(nr);
+}
+
+/**
+ * ice_set_bit - Set a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Sets the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_set_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+	bitmap[BIT_CHUNK(nr)] |= BIT_IN_CHUNK(nr);
+}
+
+/**
+ * ice_zero_bitmap - set bits of bitmap to zero
+ * @bmp: the bitmap to zero
+ * @size: Size of the bitmaps in bits
+ *
+ * This function sets all bits of a bitmap to zero.
+ */
+static inline void ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	/* For the last chunk, take care not to modify bits outside the
+	 * size boundary. ANDing with ~mask clears only the in-boundary
+	 * bits and leaves any bits beyond the boundary untouched.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+/**
+ * ice_and_bitmap - bitwise AND 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to intersect
+ * @bmp2: The second bitmap to intersect with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise AND on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows. This function returns
+ * a non-zero value if at least one bit location from both "source" bitmaps is
+ * non-zero.
+ */
+static inline int
+ice_and_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	       const ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t res = 0, mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++) {
+		dst[i] = bmp1[i] & bmp2[i];
+		res |= dst[i];
+	}
+
+	/* We want to take care not to modify any bits outside of the bitmap
+	 * size, even in the destination bitmap. Thus, we won't directly
+	 * assign the last bitmap, but instead use a bitmask to ensure we only
+	 * modify bits which are within the size, and leave any bits above the
+	 * size value alone.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] & bmp2[i]) & mask;
+	res |= dst[i] & mask;
+
+	return res != 0;
+}
+
+/**
+ * ice_or_bitmap - bitwise OR 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to OR
+ * @bmp2: The second bitmap to OR with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise OR on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows.
+ */
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+/**
+ * ice_find_next_bit - Find the index of the next set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ * @offset: the offset to start at
+ *
+ * Scans the bitmap and returns the index of the first set bit which is equal
+ * to or after the specified offset. Will return size if no bits are set.
+ */
+static inline u16
+ice_find_next_bit(const ice_bitmap_t *bitmap, u16 size, u16 offset)
+{
+	u16 i, j;
+
+	if (offset >= size)
+		return size;
+
+	/* Since the starting position may not be directly on a chunk
+	 * boundary, we need to be careful to handle the first chunk specially
+	 */
+	i = BIT_CHUNK(offset);
+	if (bitmap[i] != 0) {
+		u16 off = i * BITS_PER_CHUNK;
+
+		for (j = offset % BITS_PER_CHUNK; j < BITS_PER_CHUNK; j++) {
+			if (ice_is_bit_set(bitmap, off + j))
+				return min(size, (u16)(off + j));
+		}
+	}
+
+	/* Now we handle the remaining chunks, if any */
+	for (i++; i < BITS_TO_CHUNKS(size); i++) {
+		if (bitmap[i] != 0) {
+			u16 off = i * BITS_PER_CHUNK;
+
+			for (j = 0; j < BITS_PER_CHUNK; j++) {
+				if (ice_is_bit_set(bitmap, off + j))
+					return min(size, (u16)(off + j));
+			}
+		}
+	}
+	return size;
+}
+
+/**
+ * ice_find_first_bit - Find the index of the first set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ *
+ * Scans the bitmap and returns the index of the first set bit. Will return
+ * size if no bits are set.
+ */
+static inline u16 ice_find_first_bit(const ice_bitmap_t *bitmap, u16 size)
+{
+	return ice_find_next_bit(bitmap, size, 0);
+}
+
+/**
+ * ice_is_any_bit_set - Return true if any bit in the bitmap is set
+ * @bitmap: the bitmap to check
+ * @size: the size of the bitmap
+ *
+ * Equivalent to checking if ice_find_first_bit returns a value less than the
+ * bitmap size.
+ */
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size)
+{
+	return ice_find_first_bit(bitmap, size) < size;
+}
+
+
+#endif /* _ICE_BITOPS_H_ */
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d2e294e
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3332 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
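+/**
+ * ice_dev_onetime_setup - one-time register write workarounds
+ * @hw: pointer to the hw struct
+ *
+ * Applies the register write workarounds below (non-PXE Rx mode and
+ * VFs-per-PF allocation) needed on FPGA/A0 parts.
+ */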
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non-PXE mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+
+#define MBX_PF_VT_PFALLOC	0x00231E80 /* Reset Source: CORER */
+	/* set VFs per PF */
+	wr32(hw, MBX_PF_VT_PFALLOC, rd32(hw, PF_VT_PFALLOC_HIF));
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return the per-PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user-specified buffer, which should be interpreted as
+ * a "manage_mac_read" response. The returned MAC addresses are also stored
+ * in the HW struct (port.mac).
+ * ice_aq_discover_caps is expected to be called before this function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP)
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Skip the disable request if FW logging was never activated or the
+	 * control queue is no longer responsive
+	 */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but no modules are
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
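+/* Usage sketch (illustrative only): to emit, e.g., link-module messages
+ * on the Rx CQ, a caller would set, before device initialization:
+ *	hw->fw_log.cq_en = true;
+ *	hw->fw_log.evnts[ICE_AQC_FW_LOG_ID_LINK].cfg =
+ *		ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S;
+ * and then rely on the ice_cfg_fw_log(hw, true) call in ice_init_hw().
+ */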
+
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
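+/* Usage sketch (illustrative only; field meanings per ice_rlan_ctx_info
+ * above, assuming a 128-byte-aligned descriptor ring base):
+ *	struct ice_rlan_ctx rlan_ctx = { 0 };
+ *
+ *	rlan_ctx.base = ring_dma >> 7;
+ *	rlan_ctx.qlen = nb_desc;
+ *	ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
+ */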
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps a debug log of a control queue command and its descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 __maybe_unused mask, void *desc, void *buf,
+	     u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire lock, perform action, release lock
+ * phase of operation, it is possible that the FW may detect a timeout and issue
+ * a CORER. In this case, the driver will receive a CORER interrupt and will
+ * have to determine its cause. The calling thread that is handling this flow
+ * will likely get an error propagated back to it indicating the Download
+ * Package, Update Package or the Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion specifies the maximum time in ms that the driver
+	 * may hold the resource in the Timeout field.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * release common resource using the admin queue commands (0x0009)
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* there are some rare cases when trying to release the resource
+	 * results in an admin Q timeout, so handle them correctly
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
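
As a usage sketch, the two helpers above are meant to bracket an operation
like this (the function name and the 3000 ms timeout are illustrative only,
not part of this patch):

	static enum ice_status example_with_global_cfg_lock(struct ice_hw *hw)
	{
		enum ice_status status;

		status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID,
					 ICE_RES_WRITE, 3000 /* ms */);
		if (status == ICE_ERR_AQ_NO_WORK)
			return ICE_SUCCESS;	/* another PF did the work */
		if (status)
			return status;		/* lock not acquired */

		/* ... perform the protected operation here ... */

		ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
		return ICE_SUCCESS;
	}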
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_guar_num_vsi - determine number of guar VSI for a PF
+ * @hw: pointer to the hw structure
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities and use this to calculate the number of VSIs per PF.
+ */
+static u32 ice_get_guar_num_vsi(struct ice_hw *hw)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return ICE_MAX_VSI / funcs;
+}
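
A quick worked example of the calculation above (treating ICE_MAX_VSI as
768, which is an assumption made here for illustration, not something this
hunk defines):

	/* valid_functions = 0x05 -> bits 0 and 2 set -> funcs = 2,
	 * so each PF is guaranteed ICE_MAX_VSI / 2 = 384 VSIs
	 */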
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_SRIOV:
+			caps->sr_iov_1_1 = (number == 1);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: SR-IOV = %d\n", caps->sr_iov_1_1);
+			break;
+		case ICE_AQC_CAPS_VF:
+			if (dev_p) {
+				dev_p->num_vfs_exposed = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VFs exposed = %d\n",
+					  dev_p->num_vfs_exposed);
+			} else if (func_p) {
+				func_p->num_allocd_vfs = number;
+				func_p->vf_base_id = logical_id;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VFs allocated = %d\n",
+					  func_p->num_allocd_vfs);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VF base_id = %d\n",
+					  func_p->vf_base_id);
+			}
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_guar_num_vsi(hw);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: on ICE_AQ_RC_ENOMEM, set to the capability count FW requires
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+	/* Prep values for flags, sah, sal */
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
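
For reference, the sah/sal packing above places the first two MAC octets in
sah and the remaining four in sal; a worked example on a little-endian host
(the address value is illustrative):

	/* mac_addr = 00:11:22:33:44:55
	 * *(const u16 *)mac_addr       = 0x1100 -> HTONS -> sah = 0x0011
	 * *(const u32 *)(mac_addr + 2) = 0x55443322 -> HTONL -> sal = 0x22334455
	 */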
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ *
+ * This helper function will convert a phy_type_low to its corresponding link
+ * speed.
+ * Note: Exactly one bit should be set in phy_type_low, as this function
+ * converts a single PHY type to its corresponding speed.
+ * If no bit is set, or if more than one bit is set,
+ * ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
+ */
+static u16 ice_get_link_speed_based_on_phy_type(u64 phy_type_low)
+{
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	return speed_phy_type_low;
+}
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: For the format of link_speeds_bitmap, see the link_speed field of
+ * struct ice_aqc_get_link_status. The caller may pass in a
+ * link_speeds_bitmap that includes multiple speeds.
+ *
+ * The value of phy_type_low will present a certain link speed. This helper
+ * function will turn on bits in the phy_type_low based on the value of
+ * link_speeds_bitmap input parameter.
+ */
+void ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_low;
+	int index;
+
+	/* We first check with low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+}
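
A usage sketch for the helper above (the speed selection is illustrative):

	u64 phy_type_low = 0;

	/* set every ICE_PHY_TYPE_LOW_* bit whose speed is 10G or 25G */
	ice_update_phy_type(&phy_type_low,
			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);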
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info
+		 * It sometimes takes a really long time for the link to
+		 * come back from the atomic reset, so we wait a little
+		 * bit.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
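
A minimal caller sketch for ice_set_fc() above (handle_fc_failure is a
hypothetical handler named only for illustration):

	u8 aq_fail = ICE_SET_FC_AQ_FAIL_NONE;

	pi->fc.req_mode = ICE_FC_FULL;
	if (ice_set_fc(pi, &aq_fail, true))
		/* aq_fail narrows the failure to the get/set/update step */
		handle_fc_failure(aq_fail);	/* hypothetical */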
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Sets link_up to true if the link is up and to false if it is down. The
+ * value of link_up is not valid if the return status is non-zero. As a
+ * side effect of this call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
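
For orientation, the flags word assembled above packs table type, global
index and size into one 16-bit field; e.g. setting a 512-entry global table
at index 1 would yield (a sketch; masks elided for brevity):

	/* flags = (ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL <<
	 *          ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) |
	 *         (1 << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) |
	 *         (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
	 *          ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
	 */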
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of
+ * the Tx queue context: the Completion queue ID (if the queue uses a
+ * completion queue), the Quanta profile, the Cache profile and the Packet
+ * shaper profile.
+ *
+ * After the add Tx LAN queue AQ command completes, interrupts should be
+ * associated with specific queues. Associating a Tx queue with a Doorbell
+ * queue is not part of the add LAN Tx queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_VF_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VF_RESET;
+		/* In this case, FW expects vmvf_num to be absolute VF id */
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16((vmvf_num + hw->func_caps.vf_base_id) &
+				    ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
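
To make the size rule above concrete, a sketch of the expected buf_size for
a single group (mirroring the loop above; exact struct layout per the base
code headers):

	/* num_qs = 2: sz = 2 * sizeof(q_id)             queue IDs
	 *              + sizeof(group) - sizeof(q_id)   group header
	 *              + 2                              pad (num_qs is even)
	 * an odd num_qs needs no pad, keeping each group 4-byte aligned
	 */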
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's count is
+	 * masked to 5 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
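
The width guard above matters because shifting a 32-bit value by 32 is
undefined in C, and on x86 the hardware masks the shift count to 5 bits; a
standalone sketch of the same special-casing (illustration only):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int width = 32;
		/* (1u << width) - 1 would be undefined behavior here; on x86
		 * the count is masked to 5 bits, so it often evaluates to 0.
		 * Special-case the full-width field as ice_write_dword does:
		 */
		uint32_t mask = (width < 32) ?
				(((uint32_t)1 << width) - 1) : ~(uint32_t)0;

		printf("mask = 0x%08x\n", (unsigned int)mask);	/* 0xffffffff */
		return 0;
	}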
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's count is
+	 * masked to 6 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
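
As a usage sketch, this is how the ICE_CTX_STORE tables and ice_set_ctx()
tie together for the doorbell queue context defined earlier (field values
and the local variables hw/q_index are illustrative):

	struct ice_tx_drbell_q_ctx ctx = { 0 };
	enum ice_status status;

	ctx.ring_len = 128;	/* 13-bit field, packed at bit 64 */
	ctx.db_q_en = 1;	/* 1-bit field, packed at bit 112 */
	/* ice_write_tx_drbell_q_ctx() runs ice_set_ctx() over
	 * ice_tx_drbell_q_ctx_info and copies the dense buffer to HW
	 */
	status = ice_write_tx_drbell_q_ctx(hw, &ctx, q_index);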
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bits 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated
+	 *   by Bits 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Queue command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN Tx queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS)
+		goto ena_txq_exit;
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* if the queue is already disabled but the disable queue command still
+	 * has to be sent to complete the VF reset, then call ice_aq_dis_lan_txq
+	 * without any queue information
+	 */
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: LAN or RDMA
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI LAN queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max LAN queues array per TC
+ *
+ * This function adds/updates the VSI LAN queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from the replay filter list head, if any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_replay_vsi - replay VSI configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver VSI handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	return status;
+}
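+
+/* Replay-order sketch (other_handle is a hypothetical VSI handle): the main
+ * VSI must be replayed first, and ice_replay_post() cleans up afterwards:
+ *
+ *	ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
+ *	ice_replay_vsi(hw, other_handle);
+ *	ice_replay_post(hw);
+ */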
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+}
+
+/**
+ * ice_stat_update40 - read a 40-bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* device stats are not reset at PFR; they likely will not be zeroed
+	 * when the driver starts, so save the first values read and use them
+	 * as offsets to be subtracted from the raw values in order to report
+	 * stats that count from zero
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
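+
+/* Rollover sketch (hypothetical values): with *prev_stat = 0xFFFFFFFFF0 and
+ * a new raw reading of 0x10, new_data < *prev_stat, so the rollover branch
+ * reports (0x10 + BIT_ULL(40)) - 0xFFFFFFFFF0 = 0x20 units since start.
+ */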
+
+/**
+ * ice_stat_update32 - read a 32-bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* device stats are not reset at PFR; they likely will not be zeroed
+	 * when the driver starts, so save the first values read and use them
+	 * as offsets to be subtracted from the raw values in order to report
+	 * stats that count from zero
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..fc2870c
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "virtchnl.h"
+#include "ice_switch.h"
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+
+
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+#endif /* _ICE_COMMON_H_ */
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..cbc4cb4
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
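+/* ICE_CQ_INIT_REGS - fill one control queue's register offsets from a HW
+ * register prefix; e.g. ICE_CQ_INIT_REGS(cq, PF_FW) sets (cq)->sq.head to
+ * PF_FW_ATQH, (cq)->rq.tail to PF_FW_ARQT, and so on.
+ */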
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This records the AdminQ register offsets in the cq struct; it runs before
+ * the send and receive rings are allocated and configured
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This records the Mailbox register offsets in the cq struct; it runs before
+ * the send and receive rings are allocated and configured
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if Queue is enabled else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields: the queue is
+	 * alive only if the LEN field still holds num_sq_entries and the
+	 * ENABLE bit is set
+	 */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
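+/**
+ * ice_cfg_cq_regs - configure control queue registers for one ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ * @num_entries: ring depth to program into the length register
+ *
+ * Clears head/tail, programs length (with the enable bit) and the base
+ * address, then reads one register back to verify the config was applied.
+ */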
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ register
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
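+/* ICE_FREE_CQ_BUFS - free a ring's posted DMA buffers, its command buffer
+ * list and its dma_head array; e.g. ICE_FREE_CQ_BUFS(hw, cq, sq) walks
+ * cq->sq.r.sq_bi[0..num_sq_entries - 1].
+ */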
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' if the driver should attempt to load, 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
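+
+/* With the expected API version 0x01/0x03 (EXP_FW_API_VER_MAJOR/MINOR in
+ * ice_controlq.h), minor versions 0x01-0x05 on major 0x01 load silently;
+ * other combinations only log, and only a newer major refuses to load.
+ */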
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+
+/**
+ * ice_init_check_adminq - Check the Admin Queue FW API version to know if it is alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns the number of free descriptors
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest using the head register for better timing
+	 * reliability than the DD bit
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Clean the queue to reclaim descriptors already processed by
+	 * FW/MBX and check availability; the clean function returns the
+	 * number of free descriptors, and could be called from a separate
+	 * thread in case of asynchronous completions.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
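+	/* the loop above polls in 1 ms steps, so a command waits at most
+	 * cq->sq_cmd_timeout (ICE_CTL_Q_SQ_CMD_TIMEOUT, 250) ms for FW
+	 * write-back before being treated as timed out below
+	 */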
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
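+
+/* Typical caller pattern (sketch, mirroring ice_aq_get_fw_ver):
+ *
+ *	struct ice_aq_desc desc;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+ *	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+ */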
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
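+		/* e.g. with rq.count = 16, ntc = 14 and a wrapped ntu = 1
+		 * (hypothetical values), pending = 16 + (1 - 14) = 3
+		 */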
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
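+
+/* e.g. a 64-entry ring with next_to_clean = 10 and next_to_use = 12 has
+ * 64 + 10 - 12 - 1 = 61 unused descriptors (hypothetical values)
+ */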
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address of the DMA head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
new file mode 100644
index 0000000..934cb26
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
new file mode 100644
index 0000000..f52f7a4
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_PIPE_H_
+#define _ICE_FLEX_PIPE_H_
+
+#include "ice_type.h"
+
+/* Package format version */
+#define ICE_PKG_FMT_VER_MAJ	1
+#define ICE_PKG_FMT_VER_MNR	0
+#define ICE_PKG_FMT_VER_UPD	0
+#define ICE_PKG_FMT_VER_DFT	0
+
+enum ice_status
+ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
+
+
+/* package buffer building routines */
+
+struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
+enum ice_status
+ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
+void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
+struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
+void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
+
+/* XLT1/PType group functions */
+enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk);
+enum ice_status
+ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg);
+u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk);
+void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg);
+enum ice_status
+ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg);
+
+/* XLT2/VSI group functions */
+enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk);
+enum ice_status
+ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
+enum ice_status
+ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
+			struct LIST_HEAD_TYPE *chs, u16 *vsig);
+
+enum ice_status
+ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig);
+enum ice_status
+ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status
+ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
+	     struct ice_fv_word *es);
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
+enum ice_status
+ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
+enum ice_status
+ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
+struct ice_prof_map *
+ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt);
+struct ice_prof_map *
+ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt);
+enum ice_status
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
+enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
+void ice_free_seg(struct ice_hw *hw);
+void ice_free_hw_tbls(struct ice_hw *hw);
+enum ice_status
+ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
+	     u64 id);
+enum ice_status
+ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
+	     u64 id);
+enum ice_status
+ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
+
+enum ice_status
+ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
+	    u16 len);
+#endif /* _ICE_FLEX_PIPE_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
new file mode 100644
index 0000000..49b22bc
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
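+/*
+ * Illustrative sketch only, not part of the register map: re-arming
+ * interrupt vector 'vec', clearing its PBA bit and selecting ITR index 0
+ * could be composed from the PF0INT_DYN_CTL fields above, e.g.
+ *
+ *	u32 val = PF0INT_DYN_CTL_INTENA_M | PF0INT_DYN_CTL_CLEARPBA_M |
+ *		  (0 << PF0INT_DYN_CTL_ITR_INDX_S);
+ *	ICE_WRITE_REG(hw, PF0INT_DYN_CTL(vec), val);
+ *
+ * ICE_WRITE_REG() is assumed here as the register accessor from
+ * ice_osdep.h.
+ */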
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
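+/*
+ * The ARQ (receive) and ATQ (send) register sets above follow the common
+ * control-queue layout: BAL/BAH hold the ring base address (BAL carries
+ * address bits 31:6, so rings are expected to be 64-byte aligned), LEN
+ * carries the ring size in bits 9:0 plus the VFE/OVFL/CRIT/ENABLE status
+ * flags in bits 28..31, and H/T are the head and tail indices.
+ */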
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
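+/*
+ * Illustrative sketch only: bringing up the PF admin send queue from the
+ * registers above might look like
+ *
+ *	ICE_WRITE_REG(hw, PF_FW_ATQBAL, ICE_LO_DWORD(ring_dma));
+ *	ICE_WRITE_REG(hw, PF_FW_ATQBAH, ICE_HI_DWORD(ring_dma));
+ *	ICE_WRITE_REG(hw, PF_FW_ATQLEN,
+ *		      (num_entries & PF_FW_ATQLEN_ATQLEN_M) |
+ *		      PF_FW_ATQLEN_ATQENABLE_M);
+ *	ICE_WRITE_REG(hw, PF_FW_ATQH, 0);
+ *	ICE_WRITE_REG(hw, PF_FW_ATQT, 0);
+ *
+ * ICE_WRITE_REG(), ICE_LO_DWORD() and ICE_HI_DWORD() are assumed helpers;
+ * the actual setup sequence lives in ice_controlq.c.
+ */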
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
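+/* Layout sketch (an assumption inferred from the _4N field names, not a
+ * statement of the datasheet): each GLFLXP_PTYPE_TRANSLATION register
+ * appears to pack four 8-bit translations, so the entry for ptype p
+ * would sit in register p/4 at byte p%4:
+ *
+ *   u32 v = rd32(hw, GLFLXP_PTYPE_TRANSLATION(p / 4));
+ *   u8 xlt = (v >> (8 * (p % 4))) & 0xFF;
+ */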
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
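+/* Usage sketch (illustrative only; rd32()/wr32() are assumed to be the
+ * 32-bit MMIO read/write helpers from ice_osdep.h): select flexible
+ * descriptor profile `rxdid` for RX queue q via a read-modify-write of
+ * the shift/mask pair:
+ *
+ *   u32 v = rd32(hw, QRXFLXP_CNTXT(q));
+ *   v &= ~(QRXFLXP_CNTXT_RXDID_IDX_M | QRXFLXP_CNTXT_RXDID_PRIO_M);
+ *   v |= ((u32)rxdid << QRXFLXP_CNTXT_RXDID_IDX_S) &
+ *	  QRXFLXP_CNTXT_RXDID_IDX_M;
+ *   wr32(hw, QRXFLXP_CNTXT(q), v);
+ */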
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
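+/* Editor's note: an illustrative sketch, not part of the autogenerated
+ * register set.  Parameterized defines such as GLGEN_GPIO_CTL(_i) compute
+ * the address of one instance in a register array, and the matching
+ * *_MAX_INDEX define gives the largest valid index:
+ *
+ *	for (i = 0; i <= GLGEN_GPIO_CTL_MAX_INDEX; i++)
+ *		val = rd32(hw, GLGEN_GPIO_CTL(i));
+ *
+ * rd32() is assumed to be the register-read helper from ice_osdep.h.
+ */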
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
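+/* Editor's note: an illustrative sketch, not part of the autogenerated
+ * register set.  Each field pairs a shift (_S) and a mask (_M) define,
+ * with MAKEMASK(m, s) assumed to expand to ((m) << (s)) in ice_osdep.h.
+ * A field such as DEVSTATE above is then extracted as:
+ *
+ *	u32 rstat = rd32(hw, GLGEN_RSTAT);
+ *	u32 state = (rstat & GLGEN_RSTAT_DEVSTATE_M) >> GLGEN_RSTAT_DEVSTATE_S;
+ */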
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
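+/* Editor's note: an illustrative sketch, not part of the autogenerated
+ * register set.  A VF software reset built on the VPGEN defines above
+ * would typically set the trigger bit and poll the status bit:
+ *
+ *	wr32(hw, VPGEN_VFRTRIG(vf_id), VPGEN_VFRTRIG_VFSWR_M);
+ *	while (!(rd32(hw, VPGEN_VFRSTAT(vf_id)) & VPGEN_VFRSTAT_VFRD_M))
+ *		; /* real code needs a delay and a bounded timeout */
+ *
+ * wr32()/rd32() are assumed osdep register helpers; vf_id must not
+ * exceed VPGEN_VFRSTAT_MAX_INDEX.
+ */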
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
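+/* Convention used throughout this file: each register field is described by
+ * a _S (bit shift) and _M (pre-shifted mask) pair built from BIT()/MAKEMASK().
+ * A minimal sketch of the intended access pattern, assuming the rd32()/wr32()
+ * accessors from ice_osdep.h and a hypothetical RX queue index q:
+ *
+ *	u32 reg = rd32(hw, QRX_CTRL(q));
+ *	u8 qena = (reg & QRX_CTRL_QENA_STAT_M) >> QRX_CTRL_QENA_STAT_S;
+ *	wr32(hw, QRX_TAIL(q), tail & QRX_TAIL_TAIL_M);
+ */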
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
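+/* Each VSILAN_QTABLE word packs two 11-bit queue indices (QINDEX_0 at bit 0,
+ * QINDEX_1 at bit 16), so VSI-relative queue q lives in word q / 2. A sketch
+ * of the read-modify-write needed to map q to an absolute queue pf_q,
+ * assuming rd32()/wr32() from ice_osdep.h and hypothetical q, pf_q and
+ * vsi_num variables:
+ *
+ *	u32 v = rd32(hw, VSILAN_QTABLE(q / 2, vsi_num));
+ *	if (q & 1)
+ *		v = (v & ~VSILAN_QTABLE_QINDEX_1_M) |
+ *		    ((pf_q << VSILAN_QTABLE_QINDEX_1_S) &
+ *		     VSILAN_QTABLE_QINDEX_1_M);
+ *	else
+ *		v = (v & ~VSILAN_QTABLE_QINDEX_0_M) |
+ *		    ((pf_q << VSILAN_QTABLE_QINDEX_0_S) &
+ *		     VSILAN_QTABLE_QINDEX_0_M);
+ *	wr32(hw, VSILAN_QTABLE(q / 2, vsi_num), v);
+ */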
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
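+/* In the GL_MDET_* registers the VALID bit indicates that the other fields
+ * hold a latched malicious-driver-detect event. A sketch of decoding a TX
+ * TCLAN event, assuming the rd32() accessor from ice_osdep.h:
+ *
+ *	u32 reg = rd32(hw, GL_MDET_TX_TCLAN);
+ *	if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+ *		u16 q = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+ *			GL_MDET_TX_TCLAN_QNUM_S;
+ *		u8 ev = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+ *			GL_MDET_TX_TCLAN_MAL_TYPE_S;
+ *	}
+ */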
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
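+/* The GLNVM_ULD *_DONE bits report per-domain completion of the NVM
+ * auto-load that follows a reset. A sketch of polling for reset completion,
+ * assuming rd32() from ice_osdep.h and combining the relevant done bits:
+ *
+ *	u32 done = GLNVM_ULD_PCIER_DONE_M | GLNVM_ULD_CORER_DONE_M |
+ *		   GLNVM_ULD_GLOBR_DONE_M | GLNVM_ULD_POR_DONE_M;
+ *	while ((rd32(hw, GLNVM_ULD) & done) != done)
+ *		delay_and_check_timeout();
+ *
+ * where delay_and_check_timeout() stands in for the driver's real back-off
+ * and timeout handling.
+ */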
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
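+/* PF_FUNC_RID mirrors this function's PCI requester ID. One plausible use,
+ * assuming the rd32() accessor from ice_osdep.h, is deriving the PF index
+ * during initialization:
+ *
+ *	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+ *			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+ *		    PF_FUNC_RID_FUNCTION_NUMBER_S;
+ */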
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
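+/* Editor's note: a hedged sketch of the indexed-register convention.
+ * Macros taking (_i) compute the address of the _i-th instance, and the
+ * matching *_MAX_INDEX define bounds the legal index, e.g.:
+ *
+ *	u16 head, tail;
+ *	if (i <= GLPE_MDQ_PTR_MAX_INDEX) {
+ *		u32 ptr = rd32(hw, GLPE_MDQ_PTR(i));
+ *
+ *		head = (ptr & GLPE_MDQ_PTR_MDQ_HEAD_M) >>
+ *		       GLPE_MDQ_PTR_MDQ_HEAD_S;
+ *		tail = (ptr & GLPE_MDQ_PTR_MDQ_TAIL_M) >>
+ *		       GLPE_MDQ_PTR_MDQ_TAIL_S;
+ *	}
+ */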
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
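+/* Editor's note: a hedged sketch of composing a register value; a write
+ * reverses the decode pattern by shifting each field into place and masking
+ * it, assuming the wr32() accessor from ice_osdep.h (qp_id and desc_idx are
+ * hypothetical inputs):
+ *
+ *	u32 db = (((u32)qp_id << PFPE_WQEALLOC_PEQPID_S) &
+ *		  PFPE_WQEALLOC_PEQPID_M) |
+ *		 (((u32)desc_idx << PFPE_WQEALLOC_WQE_DESC_INDEX_S) &
+ *		  PFPE_WQEALLOC_WQE_DESC_INDEX_M);
+ *	wr32(hw, PFPE_WQEALLOC, db);
+ */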
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
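+/* Editor's note: a hedged sketch for the *HI/*LO statistic pairs, which
+ * split one wide counter across two registers (32 low bits plus however
+ * many bits the HI mask covers).  For index i the recombined counter would
+ * read as:
+ *
+ *	u64 pkts = ((u64)(rd32(hw, GLPES_PFIP4RXPKTSHI(i)) &
+ *			  GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M) << 32) |
+ *		   rd32(hw, GLPES_PFIP4RXPKTSLO(i));
+ */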
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
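+/* Editor's note: a hedged sketch for single-bit enables declared with
+ * BIT(); setting one would normally be a read-modify-write sequence so the
+ * neighboring fields are preserved:
+ *
+ *	u32 val = rd32(hw, GL_PWR_MODE_CTL);
+ *
+ *	val |= GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M;
+ *	wr32(hw, GL_PWR_MODE_CTL, val);
+ */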
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
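+
+/* Illustrative sketch: the GLSW, GLV and GLVEBUP statistics above are 40-bit
+ * counters split across a 32-bit low register and an 8-bit high register
+ * (see the 0xFF _M masks). A hypothetical reader, assuming rd32() from
+ * ice_osdep.h; the high half is read twice to guard against the low half
+ * wrapping between the two reads:
+ */
+static inline u64 ice_rd_stat40(struct ice_hw *hw, u32 loreg, u32 hireg)
+{
+	u32 hi = rd32(hw, hireg);
+	u32 lo = rd32(hw, loreg);
+	u32 hi2 = rd32(hw, hireg);
+
+	if (hi != hi2)	/* low half rolled over; re-read for a consistent pair */
+		lo = rd32(hw, loreg);
+	return ((u64)(hi2 & 0xFF) << 32) | lo;
+}
+/* e.g. ice_rd_stat40(hw, GLSW_GORCL(i), GLSW_GORCH(i)) -- good octets received */
+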
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
+#define EMP_SWT_REPIND				0x0020401c /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040a4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040ac /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041c0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
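+
+/* Illustrative sketch: each of the two timers keeps its time in a 64-bit
+ * TIME_H/TIME_L pair. A hedged reader, assuming rd32() from ice_osdep.h and
+ * no hardware latching between the two dword reads (hence the wrap retry):
+ */
+static inline u64 ice_rd_tsyn_time(struct ice_hw *hw, u32 tmr)
+{
+	/* tmr must not exceed GLTSYN_TIME_H_MAX_INDEX */
+	u32 hi = rd32(hw, GLTSYN_TIME_H(tmr));
+	u32 lo = rd32(hw, GLTSYN_TIME_L(tmr));
+
+	if (hi != rd32(hw, GLTSYN_TIME_H(tmr))) {
+		/* the low dword wrapped mid-read; take a fresh pair */
+		hi = rd32(hw, GLTSYN_TIME_H(tmr));
+		lo = rd32(hw, GLTSYN_TIME_L(tmr));
+	}
+	return ((u64)hi << 32) | lo;
+}
+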
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define GLPE_TSCD_FLR(_i)			(0x0051E24c + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
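+
+/* Illustrative sketch: the PF_VT_PFALLOC flavors each encode the inclusive
+ * VF range owned by this PF. A hypothetical decode of the CORER copy,
+ * assuming rd32() from ice_osdep.h:
+ */
+static inline u32 ice_pf_num_alloc_vfs(struct ice_hw *hw)
+{
+	u32 reg = rd32(hw, PF_VT_PFALLOC);
+	u32 first = (reg & PF_VT_PFALLOC_FIRSTVF_M) >> PF_VT_PFALLOC_FIRSTVF_S;
+	u32 last = (reg & PF_VT_PFALLOC_LASTVF_M) >> PF_VT_PFALLOC_LASTVF_S;
+
+	if (!(reg & PF_VT_PFALLOC_VALID_M))
+		return 0;	/* no VF range allocated to this PF */
+	return last - first + 1;
+}
+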
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
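+
+/* Illustrative sketch: VSIQF_HKEY holds the per-VSI RSS hash key, four key
+ * bytes per dword (KEY_0 in the low byte) across indexes 0..12, i.e. a
+ * 52-byte key. A hypothetical programming loop, assuming wr32() from
+ * ice_osdep.h and a caller-supplied key of at least 52 bytes:
+ */
+static inline void ice_wr_vsi_hkey(struct ice_hw *hw, u16 vsi, const u8 *key)
+{
+	u32 i;
+
+	for (i = 0; i <= VSIQF_HKEY_MAX_INDEX; i++)
+		wr32(hw, VSIQF_HKEY(i, vsi),
+		     (u32)key[4 * i] |
+		     ((u32)key[4 * i + 1] << 8) |
+		     ((u32)key[4 * i + 2] << 16) |
+		     ((u32)key[4 * i + 3] << 24));
+}
+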
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
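+
+/* Illustrative sketch: PFPM_WUFC arms individual wake-up sources and
+ * PFPM_WUS reports which source actually fired. A hypothetical check for a
+ * link-change or magic-packet wake, assuming rd32() from ice_osdep.h:
+ */
+static inline bool ice_woke_on_link_or_magic(struct ice_hw *hw)
+{
+	u32 wus = rd32(hw, PFPM_WUS);
+
+	return (wus & (PFPM_WUS_LNKC_M | PFPM_WUS_MAG_M)) != 0;
+}
+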
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
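+
+/* Illustrative sketch: the VF_MBX_ATQ registers back the VF side of the
+ * mailbox send queue: a 64-byte-aligned base address split across
+ * ATQBAL/ATQBAH, head/tail pointers, and a length register whose top bit
+ * enables the queue. A hypothetical bring-up, assuming wr32() from
+ * ice_osdep.h and a DMA-able descriptor ring at bus address ring_pa:
+ */
+static inline void ice_vf_mbx_atq_init(struct ice_hw *hw, u64 ring_pa, u16 ndesc)
+{
+	wr32(hw, VF_MBX_ATQBAL1, (u32)(ring_pa & 0xFFFFFFFF));
+	wr32(hw, VF_MBX_ATQBAH1, (u32)(ring_pa >> 32));
+	wr32(hw, VF_MBX_ATQH1, 0);	/* start with head == tail == 0 */
+	wr32(hw, VF_MBX_ATQT1, 0);
+	wr32(hw, VF_MBX_ATQLEN1, ndesc | VF_MBX_ATQLEN1_ATQENABLE_M);
+}
+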
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
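+
+/* Illustrative sketch: after servicing a VF vector it is re-armed by writing
+ * INTENA plus CLEARPBA; ITR index 3 is assumed here to mean "no ITR update".
+ * Assuming wr32() from ice_osdep.h:
+ */
+static inline void ice_vf_rearm_vector(struct ice_hw *hw, u32 vector)
+{
+	/* vector must not exceed VFINT_DYN_CTLN_MAX_INDEX */
+	wr32(hw, VFINT_DYN_CTLN(vector),
+	     VFINT_DYN_CTLN_INTENA_M | VFINT_DYN_CTLN_CLEARPBA_M |
+	     (3 << VFINT_DYN_CTLN_ITR_INDX_S));
+}
+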
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _QRX=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _DBQM=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+
+#endif
diff --git a/drivers/net/ice/base/ice_impl_guide.c b/drivers/net/ice/base/ice_impl_guide.c
new file mode 100644
index 0000000..853cf52
--- /dev/null
+++ b/drivers/net/ice/base/ice_impl_guide.c
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/*! \mainpage Intel FIXME Shared Code Implementation Guide
+ *
+ *\section secA  Operating System Interface
+ * The Shared Code is common code designed to coordinate, and
+ * make common, the initialization and other hardware tasks.
+ *
+ * \section sec2 Operating System Dependent Files
+ * Each driver is required to implement one or two files, ice_osdep.c and
+ * ice_osdep.h, for the operating-system-dependent portions of the shared code.
+ * The following are required in the osdep file(s) (in header file if
+ * implemented as a macro/inline-function or in the C file if implemented as a
+ * function with a prototype in the header file).
+ *
+ * \section sec3 Data Types/structures
+ * \htmlonly <br>
+ * __le16<br>
+ * __le32<br>
+ * __le64<br>
+ * <br>
+ * struct ice_dma_mem {<br>
+ * &emsp;void *va;<br>
+ * &emsp;&lt;os-specific physical address type&gt; pa;<br>
+ * &emsp;&lt;os-specific size type&gt; size;<br>
+ * &emsp;&lt;other OS-specific data...&gt;<br>
+ * }<br>
+ * <br>
+ * struct ice_lock {<br>
+ * &emsp;&lt;os specific lock type&gt; lock;<br>
+ * }<br>
+ * <br>
+ * LIST_ENTRY_TYPE	(list entry, e.g. list_head on Linux, _LIST_ENTRY on Windows)<br>
+ * LIST_HEAD_TYPE	(list head, e.g. list_head on Linux, _LIST_ENTRY on Windows)<br>
+ * \endhtmlonly
+ *
+ * \section sec4 Functions/macros
+ * \htmlonly <br>
+ * <bold>See ice_common.c:ice_init_hw() for some examples</bold><br>
+ * <br>
+ * STATIC<br>
+ * CPU_TO_BE64(a)<br>
+ * CPU_TO_BE32(a)<br>
+ * CPU_TO_BE16(a)<br>
+ * CPU_TO_LE64(a)<br>
+ * CPU_TO_LE32(a)<br>
+ * CPU_TO_LE16(a)<br>
+ * LE64_TO_CPU(a)<br>
+ * LE32_TO_CPU(a)<br>
+ * LE16_TO_CPU(a)<br>
+ * offsetof(_type, _field)<br>
+ * FIELD_SIZEOF(_type, _field)<br>
+ * ARRAY_SIZE(_array)<br>
+ * NTOHL(a)<br>
+ * NTOHS(a)<br>
+ * HTONL(a)<br>
+ * HTONS(a)<br>
+ * SNPRINTF(buf, size, fmt, ...)<br>
+ * <br>
+ * u32 rd32(struct ice_hw *, reg_offset)<br>
+ * void wr32(struct ice_hw *, reg_offset, u32 value)<br>
+ * u64 rd64(struct ice_hw *, reg_offset)<br>
+ * void wr64(struct ice_hw *, reg_offset, u64 value)<br>
+ * <br>
+ * void ice_flush(struct ice_hw *)<br>
+ * <br>
+ * void ice_debug(struct ice_hw *hw, u32 mask, char *format, ...)<br>
+ * void ice_debug_array(struct ice_hw *hw, u32 mask, u32 rowsize, u32 groupsize, char *buf, size_t len)<br>
+ * <br>
+ * void ice_info(struct ice_hw *hw, char *format, ...)<br>
+ * <br>
+ * void ice_warn(struct ice_hw *hw, char *format, ...)<br>
+ * Like ice_info but may log the message at a higher warning level<br>
+ * <br>
+ * Delay functions - bool sleep indicates sleep (true) or busy-wait (false)<br>
+ * void ice_usec_delay(unsigned long usecs, bool sleep)<br>
+ * void ice_msec_delay(unsigned long msecs, bool sleep)<br>
+ * <br>
+ * void *ice_memset(void *addr, int c, size_t n, ice_memset_type direction)<br>
+ * void *ice_memcpy(void *d, const void *s, size_t n, ice_memcpy_type dir)<br>
+ * void *ice_memdup(struct ice_hw *hw, const void *s, size_t n, ice_memcpy_type dir)<br>
+ * <br>
+ * Memory allocation functions - expected to provide zeroed memory<br>
+ * void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size)<br>
+ * void *ice_malloc(struct ice_hw *hw, size)<br>
+ * void *ice_calloc(struct ice_hw *hw, cnt, size)<br>
+ * <br>
+ * void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m)<br>
+ * void ice_free(struct ice_hw *, void *) - should not fail if void pointer is NULL<br>
+ * <br>
+ * void ice_init_lock(struct ice_lock *lock);<br>
+ * void ice_acquire_lock(struct ice_lock *lock);<br>
+ * void ice_release_lock(struct ice_lock *lock);<br>
+ * void ice_destroy_lock(struct ice_lock *lock);<br>
+ * <br>
+ * void ice_declare_bitmap(name, u16 size);<br>
+ * void ice_set_bit(unsigned int bit, unsigned long *name);<br>
+ * <br>
+ * u8 ice_hweight8(u8 weight) - determine the Hamming weight of an 8-bit value<br>
+ * <br>
+ * <bold>doubly-linked list management macros:</bold><br>
+ * INIT_LIST_HEAD(struct LIST_HEAD_TYPE *head)<br>
+ * LIST_EMPTY(const struct LIST_HEAD_TYPE *head)<br>
+ * LIST_ADD(struct LIST_ENTRY_TYPE *entry, struct LIST_HEAD_TYPE *head)<br>
+ * LIST_ADD_AFTER(struct LIST_ENTRY_TYPE *entry, struct LIST_ENTRY_TYPE *elem)<br>
+ * LIST_FIRST_ENTRY(struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * LIST_DEL(struct LIST_ENTRY_TYPE *entry)<br>
+ * LIST_FOR_EACH_ENTRY(&lt;&amp;struct used as iterator&gt;, struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * Note: it is not safe to remove list entries in a LIST_FOR_EACH_ENTRY() loop<br>
+ * LIST_FOR_EACH_ENTRY_SAFE(&lt;&amp;struct used as iterator&gt;, &lt;&amp;struct used as temporary storage&gt;, struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * LIST_REPLACE_INIT(struct LIST_HEAD_TYPE *old_head, struct LIST_HEAD_TYPE *new_head)<br>
+ *
+ * \section sec5 List implementation details
+ * The LIST macros are defined to implement a doubly-linked list which embeds
+ * the LIST_ENTRY structures as elements of the items linked to the list. The
+ * macros assume that pointer arithmetic can be used to extract the container
+ * structure from the LIST_ENTRY element and the structure type.
+ * <br>
+ * INIT_LIST_HEAD is expected to initialize a list head so that it
+ * represents a new, empty list.
+ * <br>
+ * LIST_EMPTY is called to determine if a list pointed to by a given list head
+ * contains any elements. Calling LIST_EMPTY on an uninitialized list head
+ * results in undefined implementation-specific behavior.
+ * <br>
+ * LIST_ADD is called to add an element to the front of a list pointed to by
+ * a given list head. It is assumed that LIST_ADD will perform any required
+ * initialization for the LIST_ENTRY_TYPE structure.
+ * <br>
+ * LIST_ADD_AFTER is called to insert a new element into the list after the
+ * given element. It is assumed that LIST_ADD_AFTER will perform any required
+ * initialization for the LIST_ENTRY_TYPE structure.
+ * <br>
+ * LIST_FIRST_ENTRY is called to obtain a pointer to the structure containing
+ * the first LIST_ENTRY_TYPE element of a list pointed to by the given list
+ * head. Calling LIST_FIRST_ENTRY with an empty or uninitialized list results
+ * in undefined implementation-specific behavior.
+ * <br>
+ * LIST_NEXT_ENTRY is called to obtain a pointer to the structure containing
+ * the next LIST_ENTRY_TYPE element in the list, given a pointer to the current
+ * structure.
+ * <br>
+ * LIST_DEL is called to remove an element from its associated list.
+ * <br>
+ * LIST_FOR_EACH_ENTRY is used to loop through every element of a list, using
+ * a pointer of the containing type as an iterator. It is expected to have
+ * semantics similar to for loops, and can take either a single or block
+ * statement following it. Calling LIST_DEL on the iterator is not safe and
+ * results in undefined implementation-specific behavior. If deleting elements
+ * from the list while iterating is required, use LIST_FOR_EACH_ENTRY_SAFE
+ * instead.
+ * <br>
+ * LIST_FOR_EACH_ENTRY_SAFE is used to loop through every element of a list
+ * guaranteeing safety to delete the iterator element even during the
+ * iteration. It requires two temporary pointers both of the struct type used
+ * as the iterator. If the ability to remove the iterated element from the
+ * list is required, then LIST_FOR_EACH_ENTRY_SAFE must be used instead of
+ * LIST_FOR_EACH_ENTRY.
+ * <br>
+ * LIST_REPLACE_INIT is used to replace the old head with a new head. This
+ * also reinitializes the old head to make it an empty list head. The new
+ * head does not have to be initialized before calling this function.
+ * <br>
+ * \endhtmlonly
+ */
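+
+/* A minimal, hedged sketch of the osdep hooks listed above for a DPDK
+ * environment. This is illustrative only, not the PMD's actual
+ * ice_osdep.h: the struct layouts, the _sketch names and the memzone
+ * name are assumptions; rte_read32()/rte_write32() and the memzone
+ * reserve/free calls are the real DPDK APIs being leaned on.
+ */
+#include <stdint.h>
+#include <string.h>
+#include <rte_io.h>
+#include <rte_memzone.h>
+
+struct ice_hw_sketch {
+	uint8_t *hw_addr; /* BAR0 mapping obtained from the PCI device */
+};
+
+static inline uint32_t rd32_sketch(struct ice_hw_sketch *hw, uint32_t reg)
+{
+	return rte_read32(hw->hw_addr + reg); /* ordered MMIO read */
+}
+
+static inline void
+wr32_sketch(struct ice_hw_sketch *hw, uint32_t reg, uint32_t val)
+{
+	rte_write32(val, hw->hw_addr + reg); /* ordered MMIO write */
+}
+
+struct ice_dma_mem_sketch {
+	void *va;
+	uint64_t pa;
+	uint32_t size;
+	const struct rte_memzone *mz; /* kept so the free hook can release it */
+};
+
+/* The guide requires zeroed, DMA-able memory from the alloc hook.
+ * Note memzone names must be unique; a real osdep would generate one.
+ */
+static void *
+ice_alloc_dma_mem_sketch(struct ice_dma_mem_sketch *m, uint64_t size)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_reserve_aligned("ice_dma_sketch", size,
+					 SOCKET_ID_ANY, 0, RTE_PGSIZE_4K);
+	if (mz == NULL)
+		return NULL;
+	memset(mz->addr, 0, size);
+	m->va = mz->addr;
+	m->pa = mz->iova;
+	m->size = size;
+	m->mz = mz;
+	return m->va;
+}
+
+static void ice_free_dma_mem_sketch(struct ice_dma_mem_sketch *m)
+{
+	rte_memzone_free(m->mz);
+	m->va = NULL;
+}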
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..30671a5
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2290 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-ip values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
+
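+/* Hedged sketch (not part of the HW definition): unpacking the legacy
+ * writeback qword1 with the shift/mask pairs above. The helper name is
+ * illustrative; rte_le_to_cpu_64() is the DPDK endianness helper from
+ * rte_byteorder.h, assumed available via the osdep layer.
+ */
+static inline uint16_t
+ice_rx_qw1_pkt_len_sketch(const union ice_32byte_rx_desc *rxd)
+{
+	uint64_t qw1 = rte_le_to_cpu_64(rxd->wb.qword1.status_error_len);
+
+	/* DD (bit 0 of the status field) set means write-back is complete */
+	if (!((qw1 >> ICE_RX_DESC_STATUS_DD_S) & 1))
+		return 0;
+	/* any error bit set in qword1[26:19] invalidates the packet */
+	if (qw1 & ICE_RXD_QW1_ERROR_M)
+		return 0;
+	return (qw1 & ICE_RXD_QW1_LEN_PBUF_M) >> ICE_RXD_QW1_LEN_PBUF_S;
+}
+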
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 2
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Flow Id upper 16-bits
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 4
+ * Flex-field 0: Destination Vsi
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: There are a total
+ * of 64 profiles, where profile IDs 0/1 are legacy and
+ * profiles 2-63 are flex profiles that can be programmed
+ * with specific metadata (profile 7 is reserved for HW)
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex Descriptor Dword Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32byte_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32byte_rx_flex_desc.ptype_flexi_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32byte_rx_flex_desc.pkt_length member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
+
+/* for ice_32byte_rx_flex_desc.header_length_sph_flexi_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
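+
+/* Hedged sketch (not part of the HW definition): with the NIC profile
+ * (RXDID 2) the 32-bit RSS hash is valid only when RSS_VALID is set in
+ * status_error0. The helper name is illustrative.
+ */
+static inline uint32_t
+ice_flex_rss_hash_sketch(const struct ice_32b_rx_flex_desc_nic *rxd)
+{
+	uint16_t stat0 = rte_le_to_cpu_16(rxd->status_error0);
+
+	if (!(stat0 & BIT(ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S)))
+		return 0; /* no valid hash reported for this packet */
+	return rte_le_to_cpu_32(rxd->rss_hash);
+}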
+
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If the width of a variable is not set to the correct size,
+ * bits could be shifted off the top of the variable when the field sits at
+ * the top of a byte and crosses over into the next byte.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
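+
+/* Hedged sketch: how a packing table built from ICE_CTX_STORE() might
+ * look for a few ice_rlan_ctx fields. The width/LSB values shown are
+ * illustrative only; the authoritative table is kept in ice_common.c.
+ */
+static const struct ice_ctx_ele ice_rlan_ctx_info_sketch[] = {
+	/*	      Field	Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,	13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,	8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,	57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,	13,	89),
+	{ 0 }
+};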
+
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
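+
+/* Hedged sketch: composing cmd_type_offset_bsz for a simple
+ * single-buffer data descriptor (EOP+RS, with L2 tag insertion).
+ * The helper name and parameters are illustrative.
+ */
+static inline void
+ice_fill_tx_desc_sketch(struct ice_tx_desc *txd, uint64_t dma_addr,
+			uint16_t size, uint16_t l2tag1)
+{
+	uint64_t cmd = ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS |
+		       ICE_TX_DESC_CMD_IL2TAG1;
+	uint64_t qw1 = ICE_TX_DESC_DTYPE_DATA |
+		       (cmd << ICE_TXD_QW1_CMD_S) |
+		       ((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+		       ((uint64_t)l2tag1 << ICE_TXD_QW1_L2TAG1_S);
+
+	txd->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txd->cmd_type_offset_bsz = rte_cpu_to_le_64(qw1);
+}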
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
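+
+/* Hedged sketch: building qw1 of a TSO context descriptor from the
+ * shift/mask pairs above. The helper name is illustrative.
+ */
+static inline uint64_t
+ice_tso_ctx_qw1_sketch(uint32_t tso_len, uint16_t mss)
+{
+	/* the MSS must respect the HW limits defined above */
+	if (mss < ICE_TXD_CTX_MIN_MSS || mss > ICE_TXD_CTX_MAX_MSS)
+		return 0;
+	return ICE_TX_DESC_DTYPE_CTX |
+	       ((uint64_t)ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
+	       ((uint64_t)tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+	       ((uint64_t)mss << ICE_TXD_CTX_QW1_MSS_S);
+}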
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0x7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If the width of a variable is not set to the correct size,
+ * bits could be shifted off the top of the variable when the field sits at
+ * the top of a byte and crosses over into the next byte.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VF	0
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
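+
+/* Hedged sketch of the decode flow above; the helper name is
+ * illustrative and mbuf ptype translation is omitted. A caller would
+ * pass ice_ptype_lkup[ptype] once the table below is in scope.
+ */
+static inline int
+ice_ptype_is_tunneled_sketch(struct ice_rx_ptype_decoded d)
+{
+	if (!d.known)
+		return 0; /* unknown packet type, nothing to decode */
+	return d.outer_ip == ICE_RX_PTYPE_OUTER_IP &&
+	       d.tunnel_type != ICE_RX_PTYPE_TUNNEL_NONE;
+}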
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC --> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
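+
+/* Usage sketch (illustrative, not part of this patch): an RX cleanup path
+ * can translate the hardware ptype from a descriptor through the table
+ * above. The 'known' field is assumed from the ICE_PTT decode macros;
+ * unused slots decode with known == 0, so callers can skip them:
+ *
+ *	struct ice_rx_ptype_decoded decoded;
+ *
+ *	decoded = ice_decode_rx_desc_ptype(rx_ptype);
+ *	if (!decoded.known)
+ *		return;
+ */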
+
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..fc26531
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,388 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* Only 24-bit offsets are supported; the highest byte must be zero. */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
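+
+/* Worked example of the 24-bit offset split above: for offset = 0x012345,
+ * offset_low is 0x2345 and offset_high is 0x01, while any offset with a
+ * nonzero highest byte (e.g. 0x01000000) is rejected with ICE_ERR_PARAM
+ * before the command descriptor is filled.
+ */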
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond the SR limit.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* Only up to 4KB (one sector) can be accessed in one AQ command */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
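+
+/* Illustration of the checks above, assuming the 4KB sector note implies
+ * ICE_SR_SECTOR_SIZE_IN_WORDS == 2048: reading 8 words at word offset 2040
+ * ends at word 2047 in the same sector and passes, while reading 16 words
+ * at the same offset would end at word 2055 in the next sector and fails
+ * with ICE_ERR_PARAM.
+ */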
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* values in "offset" and "words" parameters are sized as words
+	 * (16 bits) but ice_aq_read_nvm expects these values in bytes.
+	 * So do this conversion while calling ice_aq_read_nvm.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16-bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16-bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. The caller is expected to hold NVM ownership; see
+ * ice_read_sr_buf(), which acquires and releases it around this call.
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words to read in this step. It is
+		 * not allowed to read more than one sector (4KB) at a time
+		 * or to cross sector boundaries.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* If this is the last command, set the proper flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
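+
+/* Worked example of the paging above (again assuming a 2048-word sector):
+ * a 3000-word read starting at word offset 2000 is issued as three AQ
+ * reads of 48 words (up to the sector boundary), 2048 words and 904 words,
+ * with last_cmd set only on the final read.
+ */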
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads a Shadow RAM word and acquires NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16-bit word from the Shadow RAM using ice_read_sr_word_aq.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM settings
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as the Shadow RAM
+ * size, the NVM version (dev starter, EETRACK and OEM version) and
+ * blank_nvm_mode.
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the NVM programming mode,
+	 * as the blank mode may be used on the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Convert to words (sr_size holds the power-of-two exponent in KB) */
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = ((u32)eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf - Reads Shadow RAM buf and acquires the lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16-bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buffer read is preceded by taking NVM ownership and
+ * followed by its release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
+
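+/* Usage sketch (illustrative, not part of this patch): reading a single
+ * Shadow RAM word from code that already has a valid struct ice_hw.
+ * ice_read_sr_word() acquires and releases NVM ownership internally, so
+ * no explicit ice_acquire_nvm()/ice_release_nvm() pair is needed:
+ *
+ *	u16 ver;
+ *	enum ice_status status;
+ *
+ *	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &ver);
+ *	if (!status)
+ *		PMD_DRV_LOG(INFO, "NVM dev starter version: 0x%04x", ver);
+ */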
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..e1f7581
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,491 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+	u16 len_l = len;						\
+	u8 *buf_l = buf;						\
+	int i;								\
+	for (i = 0; i < len_l; i += 8)					\
+		ice_debug(hw_l, type,					\
+			  "0x%04X  0x%016"PRIx64"\n",			\
+			  i, *((u64 *)((buf_l) + i)));			\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
+
+typedef u8 ice_bitmap_t;
+#define ice_declare_bitmap(name, bits) \
+	unsigned long name[BITS_TO_LONGS(bits)]
+
+#define BITS_TO_LONGS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_PER_BYTE       8
+#define BITS_CHUNK_MASK(nr)	(~0UL >>				\
+		((BITS_PER_BYTE * sizeof(unsigned long)) -		\
+		 (((nr) - 1) % (BITS_PER_BYTE * sizeof(unsigned long))	\
+		  + 1)))
+#define ice_is_bit_set(name, bits) \
+	(!!((name)[(bits) / (BITS_PER_BYTE * sizeof(long))] & \
+	    (1UL << ((bits) % (BITS_PER_BYTE * sizeof(long))))))
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = (u8)(0xFFu >> (8 - (sz % 8)));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
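+
+/* Example of the tail handling above: for sz = 12, cnt = 2; the first byte
+ * is ANDed whole and the second under the 0x0F tail mask, so bits beyond
+ * the 12-bit size never reach dst or the result.
+ */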
+
+/* Scan the bitmap bit by bit. Return size (not a valid bit index) when no
+ * set bit is found, so that for_each_set_bit() below terminates correctly.
+ */
+static inline int ice_find_first_bit(unsigned long *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+static inline int ice_find_next_bit(unsigned long *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	for (i = bits; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
+#ifndef LINUX_SUPPORT
+static inline bool ice_is_any_bit_set(u8 *bitmap, u32 bits)
+#else
+static inline bool ice_is_any_bit_set(unsigned long *bitmap, u32 bits)
+#endif
+{
+#ifndef LINUX_SUPPORT
+	u32 max_index = (bits % 8) ? (bits / 8) + 1 : (bits / 8);
+#else
+	u32 max_index = BITS_TO_LONGS(bits);
+#endif
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc("ice", s, 0)
+#define ice_calloc(h, c, s) rte_zmalloc("ice", (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline void
+ice_zero_bitmap(unsigned long *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_LONGS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+static inline void
+ice_or_bitmap(unsigned long *dst, const unsigned long *bmp1,
+	      const unsigned long *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_LONGS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
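+
+/* Usage sketch (illustrative, not part of this patch) for the bitmap
+ * helpers above:
+ *
+ *	ice_declare_bitmap(bmp, 64);
+ *	u16 bit;
+ *
+ *	ice_zero_bitmap(bmp, 64);
+ *	ice_set_bit(3, bmp);
+ *	ice_set_bit(42, bmp);
+ *	for_each_set_bit(bit, bmp, 64)
+ *		PMD_DRV_LOG(DEBUG, "bit %d is set", bit);
+ */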
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
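+
+/* Usage sketch (illustrative, not part of this patch): the ice_lock shim
+ * lets shared base code use a single locking API on top of rte_spinlock:
+ *
+ *	struct ice_lock lock;
+ *
+ *	ice_init_lock(&lock);
+ *	ice_acquire_lock(&lock);
+ *	... critical section ...
+ *	ice_release_lock(&lock);
+ *	ice_destroy_lock(&lock);
+ */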
+
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
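+
+/* Usage sketch (illustrative, not part of this patch; the 4096-byte size
+ * is an arbitrary example value): allocating and freeing a DMA-able
+ * buffer, e.g. for a control queue ring:
+ *
+ *	struct ice_dma_mem mem;
+ *
+ *	if (!ice_alloc_dma_mem(hw, &mem, 4096))
+ *		return ICE_ERR_NO_MEMORY;
+ *	... program mem.pa into HW and access the ring via mem.va ...
+ *	ice_free_dma_mem(hw, &mem);
+ */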
+
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head)) the same in sys/queue.h */
+
+/* Note: parameters are swapped */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
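+
+/* Usage sketch (illustrative, not part of this patch) for the sys/queue
+ * based list shims above, with a hypothetical element type:
+ *
+ *	struct example_elem {
+ *		struct LIST_ENTRY_TYPE entry;
+ *		u32 value;
+ *	};
+ *
+ *	struct LIST_HEAD_TYPE head;
+ *	struct example_elem *pos, *elem;
+ *
+ *	INIT_LIST_HEAD(&head);
+ *	LIST_ADD(&elem->entry, &head);
+ *	LIST_FOR_EACH_ENTRY(pos, &head, example_elem, entry)
+ *		process_value(pos->value);
+ */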
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..665856a
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* 1 word of the allowed 5 is reserved for the switch ID, so a recipe can
+ * have at most 4 words. Up to 5 such recipes can be chained together, so
+ * the maximum number of words that can be programmed for a lookup is 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types:
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further,
+ * flags from the field vector need to be used.
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This is a mapping table entry that maps every word within a given protocol
+ * structure to the real byte offset as per the specification of that
+ * protocol header.
+ * For example, the dst address is 3 words in the ethertype header; the
+ * corresponding bytes are at offsets 0, 2, 4 in the actual packet header,
+ * and the src address is at 6, 8, 10.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect command
+ * and non-posted.
+ */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
+#endif /* _ICE_SBQ_CMD_H_ */
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..662d136
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,1715 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through, stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+static enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to element information
+ *
+ * This function queries HW element information
+ */
+static enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node to the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from HW
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
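+	/* size the buffer for (num_nodes - 1) extra teid entries beyond the
+	 * single teid declared in the flexible array of the struct
+	 */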
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove elements failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional and won't go
+			 * deeper than 9 calls
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		status = ice_sched_remove_elems(hw, node->parent, 1, &teid);
+		if (status != ICE_SUCCESS)
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "remove element failed %d\n", status);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue to port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling elements
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+
+
+
+/**
+ * ice_sched_clear_agg - clears the agg related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes the aggregator list and frees up aggregator-related
+ * memory previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to HW as well as to the SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add elements failed\n");
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* does the current number of children + required nodes exceed the max? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* utilize all the spaces if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional and won't go
+			 * deeper than 2 calls
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't modify the first node teid memory if the first node
+		 * was already added in the above call; pass temporary
+		 * storage to all further recursive calls instead.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional; for 1024 queues
+		 * per VSI, it goes through at most 16 iterations:
+		 * 1024 / 8 = 128 layer 8 nodes
+		 * 128 / 8 = 16 (add 8 nodes per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* it's always total layers - 1; the array is zero-based, hence the -2 */
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
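+
+/* Worked example: with 9 total layers (indices 0-8), the leaf queue layer
+ * is index 8 and the queue group layer is 9 - 2 = 7.
+ */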
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the vsi layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port info structure for the tree to cleanup
+ *
+ * This function is the initial call that queries the total number of Tx
+ * scheduler resources and the default topology created by firmware, and
+ * stores the information in the SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1 and 8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1 and 9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+static bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional and won't go
+		 * deeper than 8 calls
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* check for an invalid VSI handle */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI handle
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI handle from a given
+ * TC branch
+ */
+static struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
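+
+/* Worked example (assuming max_children is 8 at each layer): for 130
+ * queues with qgl = 7 and vsil = 5, num_nodes[7] = ceil(130 / 8) = 17
+ * and num_nodes[6] = ceil(17 / 8) = 3.
+ */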
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_rm_vsi_child_nodes - remove VSI child nodes from the tree
+ * @pi: port information structure
+ * @vsi_node: pointer to the VSI node
+ * @num_nodes: pointer to the num nodes that needs to be removed per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function removes the VSI child nodes from the tree. It gets called for
+ * lan and rdma separately.
+ */
+static void
+ice_sched_rm_vsi_child_nodes(struct ice_port_info *pi,
+			     struct ice_sched_node *vsi_node, u16 *num_nodes,
+			     u8 owner)
+{
+	struct ice_sched_node *node, *next;
+	u8 i, qgl, vsil;
+	u16 num;
+
+	qgl = ice_sched_get_qgrp_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	for (i = qgl; i > vsil; i--) {
+		num = num_nodes[i];
+		node = ice_sched_get_first_node(pi->hw, vsi_node, i);
+		while (node && num) {
+			next = node->sibling;
+			if (node->owner == owner && !node->num_children) {
+				ice_free_sched_node(pi, node);
+				num--;
+			}
+			node = next;
+		}
+	}
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of supported nodes needed to add this
+ * VSI into the Tx tree, including the VSI, its parent and the intermediate
+ * nodes in the layers below
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add intermediate nodes if the TC has no children;
+		 * at least one node is needed for the VSI
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached max
+			 * children then add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* the tree has an intermediate node with room for
+			 * this new VSI, so there is no need to calculate
+			 * supported nodes for the layers below.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI supported nodes into the Tx tree, including
+ * the VSI, its parent and the intermediate nodes in the layers below
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add vsi supported nodes to tc subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 prev_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+	u8 i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* number of queues is unchanged */
+	if (prev_numqs == new_numqs)
+		return status;
+
+	/* calculate number of nodes based on prev/new number of qs */
+	if (prev_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, prev_numqs, prev_num_nodes);
+
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+
+	if (prev_numqs > new_numqs) {
+		for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+			new_num_nodes[i] = prev_num_nodes[i] - new_num_nodes[i];
+
+		ice_sched_rm_vsi_child_nodes(pi, vsi_node, new_num_nodes,
+					     owner);
+	} else {
+		for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+			new_num_nodes[i] -= prev_num_nodes[i];
+
+		status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+						       new_num_nodes, owner);
+		if (status)
+			return status;
+	}
+
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and VSI is in suspended state then resume the VSI back. If TC is
+ * disabled then suspend the VSI if it is not already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if tc is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queues whenever the VSI is added to the
+		 * scheduler tree for the first time (boot or after reset);
+		 * the child nodes must be recreated in these cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
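+
+/* Illustrative call sketch (not part of the driver): enable TC 0 for a
+ * VSI with up to 16 LAN queues, e.g.
+ *
+ *	status = ice_sched_cfg_vsi(pi, vsi_handle, 0, 16,
+ *				   ICE_SCHED_NODE_OWNER_LAN, true);
+ */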
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove aggregator-related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma children nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i, j = 0;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
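
A minimal caller sketch (not part of this patch) of how a driver teardown
path might use ice_rm_vsi_lan_cfg(); the helper name and error handling are
illustrative assumptions:

	/* Sketch: release the LAN scheduler nodes for a VSI before its
	 * context is freed with ice_free_vsi().
	 */
	static enum ice_status example_release_vsi(struct ice_hw *hw,
						   u16 vsi_handle)
	{
		enum ice_status status;

		status = ice_rm_vsi_lan_cfg(hw->port_info, vsi_handle);
		if (status)
			return status;

		/* the VSI context itself can now be freed */
		return ICE_SUCCESS;
	}
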
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..9a8a215
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12-bit register that is configured while creating the RL
+ * profile(s). The MSB is a granularity bit that tells the granularity type:
+ * 0 - LSB bits are in bytes granularity
+ * 1 - LSB bits are in 1K bytes granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
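
A worked example (not part of this patch) of the burst-size encoding
described above; example_encode_burst_size() is an illustrative helper and
assumes the caller has already validated the size against
ICE_MIN_BURST_SIZE_ALLOWED/ICE_MAX_BURST_SIZE_ALLOWED:

	/* Sketch: sizes up to 2047 bytes use byte granularity (MSB clear);
	 * larger sizes set the granularity bit and store 1K-byte units,
	 * e.g. 4096 bytes encodes as 0x800 | 4 = 0x804.
	 */
	static u16 example_encode_burst_size(u32 bytes)
	{
		if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
			return ICE_BYTE_GRANULARITY | (u16)bytes;

		/* round down to a whole number of 1K-byte units */
		return ICE_KBYTE_GRANULARITY | (u16)(bytes / 1024);
	}
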
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* FW AQ command calls */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+#endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/base/ice_sriov.c b/drivers/net/ice/base/ice_sriov.c
new file mode 100644
index 0000000..0ee7496
--- /dev/null
+++ b/drivers/net/ice/base/ice_sriov.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_adminq_cmd.h"
+#include "ice_sriov.h"
+
+/**
+ * ice_aq_send_msg_to_vf
+ * @hw: pointer to the hardware structure
+ * @vfid: VF ID to send msg
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cd: pointer to command details
+ *
+ * Send a message to the VF driver (0x0802) using the mailbox
+ * queue; the message is posted asynchronously via the
+ * ice_sq_send_cmd() function
+ */
+enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+		      u8 *msg, u16 msglen, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_pf_vf_msg *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_mbx_opc_send_msg_to_vf);
+
+	cmd = &desc.params.virt;
+	cmd->id = CPU_TO_LE32(vfid);
+
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+
+	if (msglen)
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_sq_send_cmd(hw, &hw->mailboxq, &desc, msg, msglen, cd);
+}
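
A minimal caller sketch (not part of this patch): acknowledging a VF request
with no payload. The wrapper name is illustrative; real callers pass a
virtchnl operation code in v_opcode:

	/* Sketch: a NULL msg with msglen 0 sends a direct (payload-less)
	 * mailbox descriptor carrying only the opcode and return value.
	 */
	static enum ice_status
	example_ack_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode)
	{
		return ice_aq_send_msg_to_vf(hw, vfid, v_opcode,
					     0 /* v_retval: success */,
					     NULL, 0, NULL);
	}
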
+
+
+/**
+ * ice_conv_link_speed_to_virtchnl
+ * @adv_link_support: determines the format of the returned link speed
+ * @link_speed: variable containing the link_speed to be converted
+ *
+ * Convert link speed supported by hw to link speed supported by virtchnl.
+ * If adv_link_support is true, then return link speed in Mbps. Else return
+ * link speed as a VIRTCHNL_LINK_SPEED_* cast to a u32. Note that the caller
+ * needs to cast back to an enum virtchnl_link_speed in the case where
+ * adv_link_support is false, but when adv_link_support is true the caller can
+ * expect the speed in Mbps.
+ */
+u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed)
+{
+	u32 speed;
+
+	if (adv_link_support)
+		switch (link_speed) {
+		case ICE_AQ_LINK_SPEED_10MB:
+			speed = ICE_LINK_SPEED_10MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_100MB:
+			speed = ICE_LINK_SPEED_100MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_1000MB:
+			speed = ICE_LINK_SPEED_1000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_2500MB:
+			speed = ICE_LINK_SPEED_2500MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_5GB:
+			speed = ICE_LINK_SPEED_5000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_10GB:
+			speed = ICE_LINK_SPEED_10000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_20GB:
+			speed = ICE_LINK_SPEED_20000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_25GB:
+			speed = ICE_LINK_SPEED_25000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_40GB:
+			speed = ICE_LINK_SPEED_40000MBPS;
+			break;
+		default:
+			speed = ICE_LINK_SPEED_UNKNOWN;
+			break;
+		}
+	else
+		/* Virtchnl speeds are not defined for every speed supported in
+		 * the hardware. To maintain compatibility with older AVF
+		 * drivers, while reporting the speed the new speed values are
+		 * resolved to the closest known virtchnl speeds
+		 */
+		switch (link_speed) {
+		case ICE_AQ_LINK_SPEED_10MB:
+		case ICE_AQ_LINK_SPEED_100MB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_100MB;
+			break;
+		case ICE_AQ_LINK_SPEED_1000MB:
+		case ICE_AQ_LINK_SPEED_2500MB:
+		case ICE_AQ_LINK_SPEED_5GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_1GB;
+			break;
+		case ICE_AQ_LINK_SPEED_10GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_10GB;
+			break;
+		case ICE_AQ_LINK_SPEED_20GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_20GB;
+			break;
+		case ICE_AQ_LINK_SPEED_25GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_25GB;
+			break;
+		case ICE_AQ_LINK_SPEED_40GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_40GB;
+			break;
+		default:
+			speed = (u32)VIRTCHNL_LINK_SPEED_UNKNOWN;
+			break;
+		}
+
+	return speed;
+}
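
A short sketch (not part of this patch) of the cast-back caveat from the
comment above; example_legacy_speed() is an illustrative name and assumes
aq_speed holds an ICE_AQ_LINK_SPEED_* value from a link status query:

	/* Sketch: with adv_link_support == false the u32 result is really
	 * an enum virtchnl_link_speed and must be cast back by the caller.
	 */
	static enum virtchnl_link_speed example_legacy_speed(u16 aq_speed)
	{
		return (enum virtchnl_link_speed)
			ice_conv_link_speed_to_virtchnl(false, aq_speed);
	}
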
diff --git a/drivers/net/ice/base/ice_sriov.h b/drivers/net/ice/base/ice_sriov.h
new file mode 100644
index 0000000..e1734d6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sriov.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SRIOV_H_
+#define _ICE_SRIOV_H_
+
+#include "ice_common.h"
+
+/* #ifdef CONFIG_PCI_IOV */
+enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+		      u8 *msg, u16 msglen, struct ice_sq_cd *cd);
+
+u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
+/* #else CONFIG_PCI_IOV */
+static inline enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw __always_unused *hw,
+		      u16 __always_unused vfid, u32 __always_unused v_opcode,
+		      u32 __always_unused v_retval, u8 __always_unused *msg,
+		      u16 __always_unused msglen,
+		      struct ice_sq_cd __always_unused *cd)
+{
+	return ICE_SUCCESS;
+}
+
+static inline u32
+ice_conv_link_speed_to_virtchnl(bool __always_unused adv_link_support,
+				u16 __always_unused link_speed)
+{
+	return 0;
+}
+
+/* #endif CONFIG_PCI_IOV */
+#endif /* _ICE_SRIOV_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..c768733
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2415 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * A note on the hardcoded values:
+ * byte 0 = 0x2: to identify it as a locally administered DA MAC
+ * byte 6 = 0x2: to identify it as a locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In case of a VLAN filter, the first two bytes define the ether type
+ *	(0x8100) and the remaining two bytes are a placeholder for programming
+ *	a given VLAN id. In case of an Ether type filter, it is treated as a
+ *	header without a VLAN tag, and bytes 12 and 13 are used to program a
+ *	given Ether type instead.
+ */
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
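
A small sketch (not part of this patch) of how the offsets above are used to
patch the dummy header for a VLAN filter; ice_fill_sw_rule() below does the
same for real rules, and vlan_id is assumed to be <= ICE_MAX_VLAN_ID:

	/* Sketch: bytes 12-13 already hold 0x8100 from dummy_eth_header,
	 * so only the VLAN TCI placeholder at bytes 14-15 is programmed.
	 */
	static void example_set_vlan(u8 *eth_hdr, u16 vlan_id)
	{
		__be16 *off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);

		*off = CPU_TO_BE16(vlan_id);
	}
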
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe bookkeeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input and output parameter.
+ * The caller of this function first calls this function with *req_desc set
+ * to 0. If the response from f/w has *req_desc set to 0, all the switch
+ * configuration information has been returned; if non-zero (meaning not all
+ * the information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter that reflects the number of elements
+ * in the response buffer. The caller of this function should use *num_elems
+ * while parsing the response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
+
+
+
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+	cmd->vf_id = vsi_ctx->vf_num;
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware also add it into the VSI handle list.
+ * If this function gets called after reset for existing VSIs, then update
+ * with the new HW VSI number in the corresponding VSI handle list entry.
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
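
A minimal caller sketch (not part of this patch) for ice_add_vsi(); the
handle value is illustrative, and a real caller fills vsi_ctx->info with the
desired VSI configuration before the call:

	/* Sketch: create a VSI under software handle 0, letting firmware
	 * allocate the HW VSI number from its pool.
	 */
	static enum ice_status example_create_vsi(struct ice_hw *hw)
	{
		struct ice_vsi_ctx ctx = { 0 };

		ctx.alloc_from_pool = true;
		return ice_add_vsi(hw, 0 /* vsi_handle */, &ctx, NULL);
	}
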
+
+/**
+ * ice_free_vsi - free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		fi->lb_en = true;
+		/* Do not set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. One of the following is true:
+		 *    2.1 The lookup is MAC with a unicast addr, OR
+		 *    2.2 The lookup is MAC_VLAN with a unicast MAC addr
+		 *
+		 * In all other cases, lan_en has to be set to true.
+		 */
+		if (!(hw->evb_veb &&
+		      ((fi->lkup_type == ICE_SW_LKUP_MAC &&
+			IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+		       (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			IS_UNICAST_ETHER_ADDR(fi->l_data.mac_vlan.mac_addr)))))
+			fi->lan_en = true;
+	}
+}
+
+/**
+ * ice_ilog2 - Calculates the integer log base 2 of a number
+ * @n: number on which to perform operation
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
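
For reference (not part of this patch), a few worked values for ice_ilog2(),
which the ICE_FWD_TO_QGRP case below uses to derive the queue-region
exponent from a power-of-two queue group size:

	/* ice_ilog2(1) == 0, ice_ilog2(64) == 6, ice_ilog2(0) == -1;
	 * a queue group of 8 queues therefore yields q_rgn == 3.
	 */
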
+
+
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on mac_entry
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
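
A short sketch (not part of this patch) of the ice_fltr_info contents that
make ice_fill_sw_rule() build a "forward unicast MAC to VSI" rule; the MAC
address and helper name are illustrative:

	/* Sketch: a Tx MAC lookup forwarding to a single VSI; the DA from
	 * l_data.mac.mac_addr lands at ICE_ETH_DA_OFFSET in the dummy hdr.
	 */
	static void example_mac_fwd_rule(struct ice_fltr_info *fi,
					 u16 vsi_handle)
	{
		static const u8 mac[ETH_ALEN] = { 0x02, 0, 0, 0, 0, 0x01 };

		fi->lkup_type = ICE_SW_LKUP_MAC;
		fi->fltr_act = ICE_FWD_TO_VSI;
		fi->flag = ICE_FLTR_TX;
		fi->vsi_handle = vsi_handle;
		ice_memcpy(fi->l_data.mac.mac_addr, mac, ETH_ALEN,
			   ICE_NONDMA_TO_NONDMA);
	}
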
+
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold software marker and update the switch rule
+ * entry pointed to by m_ent with the newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule id of the previously created rule with single
+	 * act. Once the update happens, hardware will treat this as large
+	 * action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list id to VSI mapping
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The bookkeeping entries will get removed when the base driver
+	 * calls the remove filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do the bookkeeping associated with adding filter
+ * information. The algorithm for the bookkeeping is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list id
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry was large action then the large action needs
+		 * to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
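
A scenario sketch (not part of this patch) of the bookkeeping described
above, driven through the public ice_add_mac() path; the entries are assumed
to carry the same MAC but different vsi_handle values:

	/* Sketch: the second subscription to the same MAC takes the
	 * ice_add_update_vsi_list() branch and converts the rule from
	 * ICE_FWD_TO_VSI to ICE_FWD_TO_VSI_LIST.
	 */
	static enum ice_status
	example_share_mac(struct ice_hw *hw, struct ice_fltr_list_entry *e0,
			  struct ice_fltr_list_entry *e1)
	{
		struct LIST_HEAD_TYPE l0, l1;
		enum ice_status status;

		INIT_LIST_HEAD(&l0);
		LIST_ADD(&e0->list_entry, &l0);
		status = ice_add_mac(hw, &l0);
		if (status)
			return status;

		INIT_LIST_HEAD(&l1);
		LIST_ADD(&e1->list_entry, &l1);
		return ice_add_mac(hw, &l1);
	}
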
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists needs to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list id found containing vsi_handle
+ *
+ * Helper function to search a VSI list with a single entry containing the
+ * given VSI handle. This can be extended further to search VSI lists with
+ * more than 1 vsi_count. Returns a pointer to the VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list should be emptied before this function is called to remove the
+ * VSI list.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to be removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else {
+		if (list_elem->vsi_list_info->ref_cnt > 1)
+			list_elem->vsi_list_info->ref_cnt--;
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		/* Remove the bookkeeping entry from the list */
+		ice_free(hw, s_rule);
+
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't
+ * check for duplicates in this case; removing duplicates from a given
+ * list is the responsibility of the caller of this function.
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is a VSI number */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Call AQ switch rule in AQ_MAX chunk */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill in the rule ID based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The bookkeeping entries will get removed when the
+			 * base driver calls the remove filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
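+
+/* Illustrative caller sketch (not part of this patch; names starting with
+ * "my_" are hypothetical, and ice_malloc() is assumed to return zeroed
+ * memory). A minimal single-entry unicast add via ice_add_mac():
+ *
+ *	struct ice_fltr_list_entry *entry;
+ *	struct LIST_HEAD_TYPE my_mac_list;
+ *
+ *	INIT_LIST_HEAD(&my_mac_list);
+ *	entry = (struct ice_fltr_list_entry *)
+ *		ice_malloc(hw, sizeof(*entry));
+ *	if (!entry)
+ *		return ICE_ERR_NO_MEMORY;
+ *	entry->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ *	entry->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	entry->fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	entry->fltr_info.vsi_handle = my_vsi_handle;
+ *	ice_memcpy(entry->fltr_info.l_data.mac.mac_addr, my_addr,
+ *		   ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+ *	LIST_ADD(&entry->list_entry, &my_mac_list);
+ *	status = ice_add_mac(hw, &my_mac_list);
+ *
+ * ice_add_mac() fills in the ICE_FLTR_TX flag and the HW VSI number itself.
+ */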
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing the VSI that
+			 * we want to add. If found, use the same vsi_list_id for
+			 * this new VLAN rule or else create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI ID, but only
+		 * if the list is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* The VLAN rule exists and the VSI list used by this rule is
+		 * referenced by more than one VLAN rule, so create a new VSI
+		 * list containing the previous VSI plus the new VSI, and update
+		 * the existing VLAN rule to point to the new VSI list ID.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a single VSI; we should never hit the condition below.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* Before overriding the VSI list map info, decrement the
+		 * ref_cnt of the previous VSI list.
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
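+
+/* Illustrative note (not part of this patch): a VLAN entry is built the
+ * same way as a MAC entry in the sketch after ice_add_mac() above, but with
+ * lkup_type = ICE_SW_LKUP_VLAN and l_data.vlan.vlan_id set to a 12-bit
+ * VLAN ID; ice_add_vlan() fills in the ICE_FLTR_TX flag itself before
+ * calling ice_add_vlan_internal().
+ */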
+
+
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the SWID).
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = hw->port_info->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				hw->port_info->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				hw->port_info->dflt_tx_vsi_rule_id;
+	}
+
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			hw->port_info->dflt_tx_vsi_num = hw_vsi_id;
+			hw->port_info->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			hw->port_info->dflt_rx_vsi_num = hw_vsi_id;
+			hw->port_info->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			hw->port_info->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			hw->port_info->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			hw->port_info->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			hw->port_info->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
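+
+/* Illustrative usage (not part of this patch): to make a VSI the default
+ * receive VSI for its port and later undo it:
+ *
+ *	status = ice_cfg_dflt_vsi(hw, vsi_handle, true, ICE_FLTR_RX);
+ *	...
+ *	status = ice_cfg_dflt_vsi(hw, vsi_handle, false, ICE_FLTR_RX);
+ *
+ * The rule ID saved in hw->port_info->dflt_rx_vsi_rule_id by the "set"
+ * call is what the "clear" call uses to remove the rule.
+ */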
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. Caller should be aware that this call will only work if all
+ * the entries passed into m_list were added previously. It will not attempt to
+ * do a partial remove of entries that were found.
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of fltr_info.fltr_act and
+ * fltr_info.fwd_id fields. These are set such that later logic can
+ * extract which VSI to remove the fltr from, and pass on that information.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with list.
+ */
+#if defined(SRIOV_SUPPORT) && !defined(NO_VF_PROMISC_SUPPORT)
+enum ice_status
+#else
+static enum ice_status
+#endif
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure the VSI ID is valid and within bounds */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Remove filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
+
+
+
+
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ * @recp_id: Recipe id for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filter of recipe recp_id for a VSI represented via vsi_handle.
+ * A valid VSI handle must be passed.
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the src in case it is a VSI number */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the src in case it is a VSI number */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Update the default recipe lines and ones that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..1c55c63
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,320 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	u8 vf_num;
+	struct ice_lock rss_locks;
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or only set the data associated with the lookup
+		   * match; everything else should be zero.
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
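+
+/* Illustrative sketch (not part of this patch; my_vsi_handle is a
+ * hypothetical handle): populating ice_fltr_info for an EtherType filter,
+ * per the l_data comment above:
+ *
+ *	struct ice_fltr_info fi;
+ *
+ *	ice_memset(&fi, 0, sizeof(fi), ICE_NONDMA_MEM);
+ *	fi.lkup_type = ICE_SW_LKUP_ETHERTYPE;
+ *	fi.fltr_act = ICE_FWD_TO_VSI;
+ *	fi.src_id = ICE_SRC_ID_VSI;
+ *	fi.vsi_handle = my_vsi_handle;
+ *	fi.l_data.ethertype_mac.ethertype = 0x88F7;	(e.g. PTP frames)
+ */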
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another bigger recipe, the chain index
+	 * corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, the count of those
+	 * recipes and their recipe IDs
+	 */
+	u8 n_grp_count;
+
+	/* Bit map specifying the IDs associated with this group of recipes */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	/* Lock to protect filter rule structure */
+	struct ice_lock filt_rule_lock;
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains the mapping of MAC or
+ * VLAN membership to the HW list, since multiple VSIs can subscribe to the
+ * same MAC or VLAN. As an optimization, the VSI list should be created only
+ * when a second VSI becomes a subscriber to the same MAC address. VSI lists
+ * are always used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
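+
+/* Example of the mapping above (illustrative): if VSIs 3 and 5 both
+ * subscribe to the same MAC, there is one ice_fltr_mgmt_list_entry with
+ * vsi_count == 2 whose vsi_list_info->vsi_map has bits 3 and 5 set;
+ * vsi_list_info->ref_cnt counts how many rules share that VSI list.
+ */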
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
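+
+/* The flags above are bit values meant to be OR'ed together; e.g.
+ * (illustrative) receive-side promiscuous mode for unicast, multicast and
+ * broadcast traffic would be requested as
+ *
+ *	ICE_PROMISC_UCAST_RX | ICE_PROMISC_MCAST_RX | ICE_PROMISC_BCAST_RX
+ */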
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+#if defined(SRIOV_SUPPORT) && !defined(NO_VF_PROMISC_SUPPORT)
+enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head);
+#endif
+
+
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction);
+
+
+
+
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..057bc85
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,789 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) div64_long((n), (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
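+
+/* Despite its name, round_up_64bit() rounds to the nearest multiple rather
+ * than strictly up: (a + b / 2) / b, so for example
+ * round_up_64bit(1499, 1000) == 1 while round_up_64bit(1500, 1000) == 2.
+ */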
+
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
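+
+/* Illustrative usage (not part of this patch): to get switch-filter and
+ * init traces only, a driver would set
+ *
+ *	hw->debug_mask = ICE_DBG_INIT | ICE_DBG_SW;
+ *
+ * before the ice_debug() calls of interest are reached.
+ */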
+
+
+
+
+
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_VF,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+	ICE_VSI_VF = 1,
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bit definitions */
+	u64 phy_type_low;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of
+	 * ice_aqc_get_phy_caps structure
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+	ICE_VF_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* Virtualization support */
+	u8 sr_iov_1_1;			/* SR-IOV enabled */
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_allocd_vfs;		/* Number of allocated VFs */
+	u32 vf_base_id;			/* Logical ID of the first VF */
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vfs_exposed;		/* Total number of VFs exposed */
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* Is device Embedded versus card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the RESET_TYPE field of the GLGEN_RSTAT register.
+ * ICE_RESET_PFR does not match any RESET_TYPE field in the GLGEN_RSTAT register
+ * because its reset source is different from the other types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port to queue branches w.r.t topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM/VSI nodes connect
+ * to the driver-defined policy for the default aggregator
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
+
+
+/* The aggregator type determines if identifier is for a VSI group,
+ * aggregator group, aggregator of queues, or queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range: 1-8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range: 1-9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type Leaf). A leaf node is created
+ *  under (a) as a child node where queues get added; the add Tx/Rx queue
+ *  admin commands need the TEID of (a) to add queues.
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: Above asterisk tree covers only basic terminology and scenario.
+ *  Refer to the documentation for more info.
+ */
+
+/* Data structure for saving bw information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to configure */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* Accumulation of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
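+
+/* Illustrative sketch (not part of this patch): requesting error events on
+ * the control queue for one log ID before the FW logging config is applied:
+ *
+ *	hw->fw_log.cq_en = 1;
+ *	hw->fw_log.evnts[my_log_id].cfg = ICE_FW_LOG_EVNT_ERR;
+ *
+ * where my_log_id is a hypothetical index below ICE_AQC_FW_LOG_ID_MAX.
+ */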
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregators */
+
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the itr/intrl granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ST_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
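+
+/* For illustration, a flash tool would derive the checksum word as:
+ *
+ *	for (i = 0; i < nwords; i++)
+ *		if (i != ICE_SR_SW_CHECKSUM_WORD)
+ *			sum += words[i];
+ *	checksum = (u16)(ICE_SR_SW_CHECKSUM_BASE - sum);
+ *
+ * so that the 16-bit sum of all words, checksum included, is 0xBABA.
+ */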
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/*
+ * Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL register.
+ * This is needed to determine the BAR0 space for the VFs
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
diff --git a/drivers/net/ice/base/virtchnl.h b/drivers/net/ice/base/virtchnl.h
new file mode 100644
index 0000000..90192f5
--- /dev/null
+++ b/drivers/net/ice/base/virtchnl.h
@@ -0,0 +1,787 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum
+ * of three VSIs. All the queue indexes are relative to the VSI. Each VF
+ * can have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_STATUS_ERR_PARAM			= -5,
+	VIRTCHNL_STATUS_ERR_NO_MEMORY			= -18,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR		= -53,
+	VIRTCHNL_STATUS_ERR_NOT_SUPPORTED		= -64,
+};
+
+/* Backward compatibility */
+#define VIRTCHNL_ERR_PARAM VIRTCHNL_STATUS_ERR_PARAM
+#define VIRTCHNL_STATUS_NOT_SUPPORTED VIRTCHNL_STATUS_ERR_NOT_SUPPORTED
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+	/* opcode 19 is reserved */
+	/* opcodes 20, 21, and 22 are reserved */
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+	VIRTCHNL_OP_ENABLE_CHANNELS = 30,
+	VIRTCHNL_OP_DISABLE_CHANNELS = 31,
+	VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
+	VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+};
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{ virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL_CHECK_UNION_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{ virtchnl_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
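+
+/* For example, VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg) below expands
+ * roughly to:
+ *
+ *	enum virtchnl_static_assert_enum_virtchnl_msg {
+ *		virtchnl_static_assert_virtchnl_msg =
+ *			(20) / ((sizeof(struct virtchnl_msg) == (20)) ? 1 : 0)
+ *	};
+ *
+ * so any size mismatch becomes a divide by zero in a constant expression
+ * and the build fails.
+ */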
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures. */
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
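+
+/* For illustration, a VF driver would check the PF's reply roughly as
+ * follows (warn_on_minor_mismatch() is a placeholder, not a real API):
+ *
+ *	if (pf_ver->major != VIRTCHNL_VERSION_MAJOR)
+ *		return -1;
+ *	if (pf_ver->minor != VIRTCHNL_VERSION_MINOR)
+ *		warn_on_minor_mismatch();
+ */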
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[ETH_ALEN];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0x00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0x00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0x00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0x00400000
+#define VIRTCHNL_VF_OFFLOAD_ADQ			0x00800000
+/* Define below the capability flags that are not offloads */
+#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED		0x00000080
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
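+
+/* Note: vsi_res[1] is a variable-length trailer. A reply carrying n VSIs
+ * occupies, for example:
+ *
+ *	sizeof(struct virtchnl_vf_resource) +
+ *		(n - 1) * sizeof(struct virtchnl_vsi_resource)
+ */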
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
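+
+/* For example, enabling the first four queue pairs of VSI 3 would fill
+ * the selector as follows (queue n maps to bit n):
+ *
+ *	qs.vsi_id = 3;
+ *	qs.rx_queues = 0xF;
+ *	qs.tx_queues = 0xF;
+ */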
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[ETH_ALEN];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes */
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
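+
+/* Both structs end in a one-byte placeholder array, so the expected
+ * message length is, for example:
+ *
+ *	sizeof(struct virtchnl_rss_key) + key_len - 1
+ *
+ * which matches the checks in virtchnl_vc_validate_vf_msg() below.
+ */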
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_ENABLE_CHANNELS
+ * VIRTCHNL_OP_DISABLE_CHANNELS
+ * VF sends these messages to enable or disable channels based on
+ * the user specified queue count and queue offset for each traffic class.
+ * This struct encompasses all the information that the PF needs from
+ * VF to create a channel.
+ */
+struct virtchnl_channel_info {
+	u16 count; /* number of queues in a channel */
+	u16 offset; /* queues in a channel start from 'offset' */
+	u32 pad;
+	u64 max_tx_rate;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_channel_info);
+
+struct virtchnl_tc_info {
+	u32	num_tc;
+	u32	pad;
+	struct	virtchnl_channel_info list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_tc_info);
+
+/* VIRTCHNL_OP_ADD_CLOUD_FILTER
+ * VIRTCHNL_OP_DEL_CLOUD_FILTER
+ * VF sends these messages to add or delete a cloud filter based on the
+ * user specified match and action filters. These structures encompass
+ * all the information that the PF needs from the VF to add/delete a
+ * cloud filter.
+ */
+
+struct virtchnl_l4_spec {
+	u8	src_mac[ETH_ALEN];
+	u8	dst_mac[ETH_ALEN];
+	__be16	vlan_id;
+	__be16	pad; /* reserved for future use */
+	__be32	src_ip[4];
+	__be32	dst_ip[4];
+	__be16	src_port;
+	__be16	dst_port;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(52, virtchnl_l4_spec);
+
+union virtchnl_flow_spec {
+	struct	virtchnl_l4_spec tcp_spec;
+	u8	buffer[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_UNION_LEN(128, virtchnl_flow_spec);
+
+enum virtchnl_action {
+	/* action types */
+	VIRTCHNL_ACTION_DROP = 0,
+	VIRTCHNL_ACTION_TC_REDIRECT,
+};
+
+enum virtchnl_flow_type {
+	/* flow types */
+	VIRTCHNL_TCP_V4_FLOW = 0,
+	VIRTCHNL_TCP_V6_FLOW,
+};
+
+struct virtchnl_filter {
+	union	virtchnl_flow_spec data;
+	union	virtchnl_flow_spec mask;
+	enum	virtchnl_flow_type flow_type;
+	enum	virtchnl_action action;
+	u32	action_meta;
+	u8	field_flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(272, virtchnl_filter);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		/* If the PF driver does not support the new speed reporting
+		 * capabilities then use link_event else use link_event_adv to
+		 * get the speed and link information. The ability to understand
+		 * new speeds is indicated by setting the capability flag
+		 * VIRTCHNL_VF_CAP_ADV_LINK_SPEED in vf_cap_flags parameter
+		 * in virtchnl_vf_resource struct and can be used to determine
+		 * which link event struct to use below.
+		 */
+		struct {
+			enum virtchnl_link_speed link_speed;
+			u8 link_status;
+		} link_event;
+		struct {
+			/* link_speed provided in Mbps */
+			u32 link_speed;
+			u8 link_status;
+		} link_event_adv;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return 0xDEADBEEF, which, when masked,
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
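+
+/* For illustration, a VF would poll for completion roughly as follows
+ * (the register read helper is a placeholder):
+ *
+ *	state = read_reg(VFGEN_RSTAT) & 0x3;
+ *	done = (state == VIRTCHNL_VFR_COMPLETED ||
+ *		state == VIRTCHNL_VFR_VFACTIVE);
+ *
+ * Note 0xDEADBEEF & 0x3 == 3, which matches none of the states above.
+ */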
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	case VIRTCHNL_OP_ENABLE_CHANNELS:
+		valid_len = sizeof(struct virtchnl_tc_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_tc_info *vti =
+				(struct virtchnl_tc_info *)msg;
+			valid_len += (vti->num_tc - 1) *
+				     sizeof(struct virtchnl_channel_info);
+			if (vti->num_tc == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_DISABLE_CHANNELS:
+		break;
+	case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+	case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+		valid_len = sizeof(struct virtchnl_filter);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_STATUS_ERR_PARAM;
+	}
+	/* a few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
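+
+/* Example usage in a PF message handler (illustrative only):
+ *
+ *	err = virtchnl_vc_validate_vf_msg(&vf_ver, v_opcode, msg, msglen);
+ *	if (err)
+ *		return err;
+ *
+ * so malformed buffers are rejected before the payload is dereferenced.
+ */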
+#endif /* _VIRTCHNL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 01/19] net/ice: add base code Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  7:56   ` Varghese, Vipin
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops Wenzhuo Lu
                   ` (22 subsequent siblings)
  24 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   9 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  75 ++++
 drivers/net/ice/ice_ethdev.c            | 645 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 318 ++++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/ice_rxtx.h              | 117 ++++++
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 9 files changed, 1215 insertions(+)
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/config/common_base b/config/common_base
index d12ae98..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,15 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
+
+#
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..00e1dda
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER  = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flow.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flex_pipe.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..7be77cf
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,645 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static void ice_dev_close(struct rte_eth_dev *dev);
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 __rte_unused void *opaque)
+{
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	errno = 0;
+	num = strtoul(qp_value, &end, 10);
+
+	if (!num || (*end == '-') || (errno != 0)) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int ret;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	ret = rte_kvargs_process(kvlist, queue_num_key,
+				 ice_check_qp_num, NULL);
+	rte_kvargs_free(kvlist);
+	if (ret < 0)
+		return 0;
+
+	return ret;
+}
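+
+/* For example, capping the PF at 8 queue pairs from the EAL command line
+ * (the PCI address is illustrative):
+ *
+ *	-w 0000:18:00.0,max_queue_pair_num=8
+ */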
+
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc("ice", sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* initialize the queue heap */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize the element covering the whole range */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* An exact fit is best */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry found that satisfies the request */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly the number of queues requested;
+	 * remove it from the free list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested;
+		 * create a new entry for the alloc list and adjust the
+		 * base and length of the free-list entry accordingly.
+		 */
+		entry = rte_zmalloc("res_pool", sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
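+
+/* For illustration: with free entries of length 8 and 4, a request for
+ * 4 takes the length-4 entry (exact fit), while a request for 3 splits
+ * the length-4 entry and leaves a length-1 remainder on the free list.
+ */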
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Only TC0 is supported now; multi-TC support is left for later.
+	 * Configure the TC and queue mapping parameters: for each enabled
+	 * TC, allocate qpnum_per_tc queues to its traffic.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust the queue number to actual queues that can be applied */
+	vsi->nb_qps = 0x1 << bsf;
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+	info->ingress_table  = rte_cpu_to_le_32(0x00FAC688);
+	info->egress_table   = rte_cpu_to_le_32(0x00FAC688);
+	info->outer_up_table = rte_cpu_to_le_32(0x00FAC688);
+	return 0;
+}
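+
+/* For illustration: UP n occupies bits [3n + 2 : 3n] of the table, so
+ * with the 1:1 map (0x00FAC688 >> (3 * 5)) & 0x7 == 5 for UP 5.
+ */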
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr(
+		(struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc("ice", sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/*  Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int qp_num;
+
+	/* Parse the devargs only once and cache the result */
+	qp_num = ice_config_max_queue_pair_num(dev->device->devargs);
+	if (qp_num > 0)
+		pf->lan_nb_qp_max = (uint16_t)qp_num;
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc("ice_vsi", sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of the VSI add/update
+	 * command. Assume vsi->base_queue is 0 for now and don't consider
+	 * the SRIOV or VMDQ cases in this first stage; only the main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* store VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* Only TC0 at the beginning. What we need here is the maximum
+	 * number of TX queues; currently vsi->nb_qps holds that value.
+	 * Correct this if that ever changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	rte_eth_copy_pci_info(dev, pci_dev);
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return 0;
+
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_release_vsi(pf->main_vsi);
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Registers itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log);
+static void
+ice_init_log(void)
+{
+	ice_logtype_init = rte_log_register("pmd.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return;
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..7928684
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,318 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12-bit number.
+ * The VFTA array is a 4096-bit array of 128 32-bit elements.
+ * 2^5 = 32, so the lower 5 bits of vlan_id select the bit within a 32-bit
+ * element and the upper 7 bits select the VFTA array index.
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
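+/* e.g. vlan_id 100: ICE_VFTA_IDX(100) == 3 and ICE_VFTA_BIT(100) ==
+ * (1 << 4), i.e. bit 4 of element 3.
+ */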
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When the driver is loaded, only the default main VSI exists. When
+	 * a new VSI needs to be added, the HW needs to know how the VSIs are
+	 * organized. Besides that, a VSI is only an element and cannot switch
+	 * packets itself, so a new component, a VEB, must be added to perform
+	 * switching. Therefore a new VSI must specify its uplink VSI (parent
+	 * VSI) before it is created. The uplink VSI checks whether it already
+	 * has a VEB to switch packets; if not, it tries to create one. The
+	 * uplink VSI then moves the new VSI into its sib_vsi_list to manage
+	 * all the downlink VSIs.
+	 *  sib_vsi_list: the list of VSIs that share the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It is NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt bound to the VSI */
+	uint16_t nb_msix;   /* The max number of MSIX vectors */
+	uint8_t enabled_tc; /* The enabled traffic classes */
+	uint8_t vlan_anti_spoof_on; /* Whether VLAN anti-spoofing is enabled */
+	uint8_t vlan_filter_on; /* Whether VLAN filtering is enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF associate to */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Next free software VSI index.
+	 * To keep it simple, indexes are not recycled; it is assumed
+	 * there are more than enough of them.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* Internal packet statistics; excluded from the totals */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* The PVID to set; valid when 'on' is set */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)pf->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
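
For reference, ice_align_floor() rounds its argument down to the largest
power of two that does not exceed it. A few illustrative values, assuming
a 32-bit int:

	/* ice_align_floor(0)    == 0
	 * ice_align_floor(5)    == 4    (largest power of two <= 5)
	 * ice_align_floor(1000) == 512
	 * ice_align_floor(1024) == 1024
	 */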
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _ICE_LOGS_H_ */
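
A minimal usage sketch for these macros; the function below is hypothetical
and only illustrates the call style:

	static int
	example_init(void)
	{
		PMD_INIT_FUNC_TRACE(); /* "example_init():  >>" at DEBUG level */
		PMD_DRV_LOG(INFO, "driver loaded"); /* newline appended */
		PMD_RX_LOG(DEBUG, "only compiled with RTE_LIBRTE_ICE_DEBUG_RX");
		return 0;
	}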
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of next staged packets */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
+#endif /* _ICE_RXTX_H_ */
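
A minimal sketch of how such a union is typically filled from an mbuf when
building a Tx context (the helper name is hypothetical, not part of this
patch):

	static inline uint64_t
	example_tx_offload(const struct rte_mbuf *m)
	{
		union ice_tx_offload off = { .data = 0 };

		off.l2_len = m->l2_len;       /* MAC header length */
		off.l3_len = m->l3_len;       /* IP header length */
		off.l4_len = m->l4_len;       /* L4 header length */
		off.tso_segsz = m->tso_segsz; /* TCP TSO segment size */
		return off.data;              /* compact 64-bit view */
	}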
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 01/19] net/ice: add base code Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 02/19] net/ice: support device initialization Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-12-03 15:24   ` Rami Rosen
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 04/19] net/ice: support getting device information Wenzhuo Lu
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Normally, when the device is started or stopped, its queues
should be started and stopped as well. This patch supports
both.

Below ops are added; a short application-side usage sketch
follows below.
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
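
A minimal application-side sketch of the intended call flow using the
generic ethdev API (port_id and mbuf_pool are assumed to be set up
elsewhere; queue and descriptor counts are illustrative):

	struct rte_eth_conf conf = { 0 };

	rte_eth_dev_configure(port_id, 1, 1, &conf);
	rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
			       NULL, mbuf_pool);
	rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL);
	rte_eth_dev_start(port_id);   /* starts all non-deferred queues */
	/* ... datapath ... */
	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);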
---
 drivers/net/ice/Makefile       |   1 +
 drivers/net/ice/ice_ethdev.c   | 207 ++++++++-
 drivers/net/ice/ice_lan_rxtx.c | 951 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  20 +
 4 files changed, 1178 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 00e1dda..955d719 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flex_pipe.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 7be77cf..bbf1ed4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,7 +14,11 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -24,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -629,6 +645,164 @@
 		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
 }
 
+static int
+ice_dev_configure(__rte_unused struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	/* Initialize to TRUE. If any Rx queue doesn't meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc("rss_key",
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc("rss_lut",
+					   vsi->rss_lut_size, 0);
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Generate a random default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->vsi_id, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
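+	/* The LUT spreads queues round-robin: with e.g. 4 Rx queues the
+	 * table entries become 0, 1, 2, 3, 0, 1, ... so RSS hash results
+	 * map evenly onto the configured queues.
+	 */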
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->vsi_id,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	/* Program the Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* Program the Rx queues' context in hardware */
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
 static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
@@ -638,8 +812,39 @@
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return;
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
 	ice_shutdown_all_ctrlq(hw);
 }
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..67b89df
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,951 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptors and sets the register
+	 * to flex descriptor mode.
+	 * DPDK uses legacy descriptors, so set the register back to the
+	 * default value and then use legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size as the head split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
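+	/* Example: with the common 2048-byte mbuf data room (plus headroom),
+	 * buf_size and rx_buf_len are 2048 bytes, so the chain limit below
+	 * caps max_pkt_len at ICE_SUPPORT_CHAIN_NUM * 2048 = 10240 bytes.
+	 */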
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u,"
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold defined in 64 descriptors units.
+	 * When the number of free descriptors goes below the lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
+	rx_ctx.lrxqthresh = 2;
+	/* Default to 32-byte descriptors; VLAN tag extracted to L2TAG2 (1st) */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. QENA_STAT is expected to follow
+	 * QENA_REQ within no more than 10 us.
+	 * TODO: the wait counter may need tuning later.
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check whether the wait timed out */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions(
+	__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
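+/* Worked example: nb_rx_desc = 1024 with rx_free_thresh = 32 satisfies all
+ * of the checks above: 32 >= ICE_RX_MAX_BURST (32), 32 < 1024, and
+ * 1024 % 32 == 0, so the bulk allocation Rx path may be used.
+ */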
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or setup",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add lan txq");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(DEBUG, "Failed to disble Lan Tx queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("ice rx queue",
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocating a little more memory because vectorized/bulk_alloc Rx
+	 * functions don't check boundaries each time.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket("ice rx sw ring",
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
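+	/* Worked example: nb_desc = 1024 with the defaults tx_rs_thresh = 32
+	 * and tx_free_thresh = 32 satisfies every constraint above:
+	 * 32 < 1022, 32 <= 32, 1024 % 32 == 0, and 32 < 1021.
+	 */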
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((tx_rs_thresh > 1) && (tx_conf->tx_thresh.wthresh != 0)) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("ice tx queue",
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("ice tx sw ring",
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
 		uint64_t outer_l3_len:16; /* outer L3 Header Length */
 	};
 };
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 04/19] net/ice: support getting device information
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (2 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 05/19] net/ice: support packet type getting Wenzhuo Lu
                   ` (20 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
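
A minimal application-side sketch of retrieving the information exposed
here (standard ethdev API; port_id is assumed valid):

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	printf("max rxq %u, max txq %u, max pkt len %u\n",
	       dev_info.max_rx_queues, dev_info.max_tx_queues,
	       dev_info.max_rx_pktlen);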
---
 drivers/net/ice/ice_ethdev.c | 90 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bbf1ed4..b46f7b2 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -848,3 +851,90 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = 32;
+	dev_info->default_txportconf.burst_size = 32;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = 1024;
+	dev_info->default_txportconf.ring_size = 1024;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 05/19] net/ice: support packet type getting
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (3 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 04/19] net/ice: support getting device information Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 06/19] net/ice: support link update Wenzhuo Lu
                   ` (19 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
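
A minimal application-side sketch of querying the supported packet types
(standard ethdev API; port_id is assumed valid):

	uint32_t ptypes[32];
	int i, num;

	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					       ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num; i++)
		printf("ptype 0x%08x\n", ptypes[i]);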
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index b46f7b2..61ae9ca 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -492,6 +493,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	rte_eth_copy_pci_info(dev, pci_dev);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 67b89df..a9e8c03 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -908,6 +908,42 @@
 	rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -949,3 +985,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet describes what each value means in detail.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 06/19] net/ice: support link update
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (4 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 05/19] net/ice: support packet type getting Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting Wenzhuo Lu
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.
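
A minimal sketch of reading the state this op reports, via the generic
API; port_id is an illustrative assumption:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Query the current link state without waiting for completion;
     * rte_eth_link_get_nowait() ends up calling the link_update op
     * with wait_to_complete = 0.
     */
    static void
    show_link(uint16_t port_id)
    {
            struct rte_eth_link link;

            rte_eth_link_get_nowait(port_id, &link);
            if (link.link_status)
                    printf("Port %u up, %u Mbps, %s duplex\n", port_id,
                           link.link_speed,
                           link.link_duplex == ETH_LINK_FULL_DUPLEX ?
                           "full" : "half");
            else
                    printf("Port %u down\n", port_id);
    }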

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 335 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 335 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 61ae9ca..75246da 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -330,6 +333,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc("msg_buffer", event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by the NIC for handling
+ * a specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -487,6 +671,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -495,6 +680,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	rte_eth_copy_pci_info(dev, pci_dev);
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -541,6 +727,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -587,6 +782,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
 		return 0;
@@ -600,6 +797,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -765,6 +969,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info AQ command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -785,6 +992,8 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -805,6 +1014,13 @@ static int ice_init_rss(struct ice_pf *pf)
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -940,3 +1156,122 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_rxportconf.ring_size = 1024;
 	dev_info->default_txportconf.ring_size = 1024;
 }
+
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (5 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 06/19] net/ice: support link update Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  9:58   ` Varghese, Vipin
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 08/19] net/ice: support MAC ops Wenzhuo Lu
                   ` (17 subsequent siblings)
  24 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops mtu_set.
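
A minimal sketch of exercising the op; port_id and the 9000-byte MTU are
illustrative assumptions. The op returns -EBUSY while the port is
started, so stop the port first:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Switch the port to a jumbo-frame MTU through the new mtu_set op */
    static void
    set_jumbo_mtu(uint16_t port_id)
    {
            rte_eth_dev_stop(port_id);
            if (rte_eth_dev_set_mtu(port_id, 9000) != 0)
                    printf("MTU change rejected\n");
            rte_eth_dev_start(port_id);
    }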

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 75246da..5beb356 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 };
 
 static void
@@ -1275,3 +1277,38 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	/* check if mtu is within the allowed range */
+	if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
+		return -EINVAL;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 08/19] net/ice: support MAC ops
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (6 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 09/19] net/ice: support VLAN ops Wenzhuo Lu
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops (a usage sketch follows the list),
mac_addr_set
mac_addr_add
mac_addr_remove
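
A minimal sketch of driving the three ops through the generic API;
port_id and the MAC values are illustrative assumptions:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static void
    mac_ops_demo(uint16_t port_id)
    {
            struct ether_addr dflt = {
                    .addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} };
            struct ether_addr extra = {
                    .addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02} };

            /* mac_addr_set: replace the default (primary) MAC */
            rte_eth_dev_default_mac_addr_set(port_id, &dflt);
            /* mac_addr_add: install a secondary filter in pool 0 */
            rte_eth_dev_mac_addr_add(port_id, &extra, 0);
            /* mac_addr_remove: drop the secondary filter again */
            rte_eth_dev_mac_addr_remove(port_id, &extra);
    }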

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 242 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 242 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5beb356..afcbad1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 };
 
 static void
@@ -335,6 +345,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc("mac_filter", sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -543,6 +677,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -628,6 +765,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximam number of the TX queues.
 	 * Currently vsi->nb_qps means it.
@@ -1312,3 +1464,93 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 09/19] net/ice: support VLAN ops
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (7 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 08/19] net/ice: support MAC ops Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 10/19] net/ice: support RSS Wenzhuo Lu
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops (a usage sketch follows the list),
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set
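
A minimal sketch mapping each op to its generic ethdev call; port_id and
the VLAN/TPID values are illustrative assumptions:

    #include <rte_ethdev.h>

    static void
    vlan_ops_demo(uint16_t port_id)
    {
            /* vlan_filter_set: accept VLAN 100 on this port */
            rte_eth_dev_vlan_filter(port_id, 100, 1);
            /* vlan_offload_set: enable VLAN strip and filter offloads */
            rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_STRIP_OFFLOAD |
                                                  ETH_VLAN_FILTER_OFFLOAD);
            /* vlan_tpid_set: use 0x88a8 as the outer TPID */
            rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER,
                                            0x88a8);
            /* vlan_pvid_set: insert VLAN 100 into untagged TX packets */
            rte_eth_dev_set_vlan_pvid(port_id, 100, 1);
    }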

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 602 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 602 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index afcbad1..c6267fa 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
 static void
@@ -469,6 +482,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc("vlan_filter", sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * Vlan 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check whether it is already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq insertion",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check whether it is already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -828,6 +1132,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -881,6 +1186,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -916,6 +1226,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1554,3 +1866,293 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 		return;
 	}
 }
+
+static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check whether it is already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+		break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If insert pvid is enabled, only tagged pkts are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
-- 
1.9.3

* [dpdk-dev] [PATCH 10/19] net/ice: support RSS
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (8 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 09/19] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 11/19] net/ice: support RX queue interruption Wenzhuo Lu
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
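
For reference (not part of the patch): these ops are reached through the
generic ethdev API. A minimal application-side sketch, where the helper
name is illustrative, port_id and nb_rxq are assumed to be configured
elsewhere, and the device is assumed to report a 512-entry redirection
table (ETH_RSS_RETA_SIZE_512), matching the size check in the reta ops
below:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative helper: spread RSS traffic across nb_rxq Rx queues. */
static int
example_spread_reta(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta[512 / RTE_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < 512; i++) {
		/* mark the entry valid within its 64-entry group ... */
		reta[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		/* ... and point it at a queue, round-robin */
		reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, 512);
}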

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 248 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c6267fa..31c77e4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2073,6 +2087,240 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->vsi_id, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->vsi_id, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	struct ice_aqc_get_set_rss_keys *key_dw =
+		(struct ice_aqc_get_set_rss_keys *)key;
+
+	ret = ice_aq_set_rss_key(hw, vsi->vsi_id, key_dw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key(
+		hw, vsi->vsi_id,
+		(struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	/* set hash key */
+	status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (status)
+		return status;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: default set to 0 as hf config is not supported now */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

* [dpdk-dev] [PATCH 11/19] net/ice: support RX queue interruption
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (9 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 10/19] net/ice: support RSS Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 12/19] net/ice: support FW version getting Wenzhuo Lu
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
rx_queue_intr_enable
rx_queue_intr_disable
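
For reference, an application consumes these ops through the ethdev
Rx-interrupt API, typically in the l3fwd-power style. A sketch, where the
helper name is illustrative and the port is assumed to have been
configured with intr_conf.rxq = 1 and started:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Illustrative helper: sleep until a packet arrives on the Rx queue. */
static void
example_rx_intr_wait(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event ev;

	/* add the queue's interrupt to this thread's epoll instance */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);
	rte_eth_dev_rx_intr_enable(port_id, queue_id);    /* PMD op */
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1); /* block */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);   /* PMD op */
	/* resume polling with rte_eth_rx_burst() */
}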

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 236 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 236 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 31c77e4..8d41516 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -1401,6 +1407,186 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do the actual binding; use ITR index 0 */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and also clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+		rte_zmalloc("intr_vec", dev->data->nb_rx_queues * sizeof(int),
+			    0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1438,6 +1624,10 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	/* enable Rx interrupts and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -1472,6 +1662,7 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1491,6 +1682,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -2320,6 +2514,48 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -E_RTE_SECONDARY;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
-- 
1.9.3

* [dpdk-dev] [PATCH 12/19] net/ice: support FW version getting
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (10 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 11/19] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 13/19] net/ice: support EEPROM information getting Wenzhuo Lu
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops fw_version_get.
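
For reference, the application-side counterpart might look like the sketch
below (helper name illustrative); a 32-byte buffer comfortably holds the
"maj.min.build api_maj.api_min" string formatted by this op:

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative helper: print the firmware/AQ API version of a port. */
static void
example_print_fw_version(uint16_t port_id)
{
	char fw_version[32];

	if (rte_eth_dev_fw_version_get(port_id, fw_version,
				       sizeof(fw_version)) == 0)
		printf("port %u firmware: %s\n", port_id, fw_version);
}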

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8d41516..2d5e73d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2557,6 +2560,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

* [dpdk-dev] [PATCH 13/19] net/ice: support EEPROM information getting
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (11 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 12/19] net/ice: support FW version getting Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 14/19] net/ice: support statistics Wenzhuo Lu
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add below ops,
get_eeprom_length
get_eeprom
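
For reference, a sketch of reading the NVM from the application side
(helper name illustrative; a full-length read from offset 0 is assumed,
and the caller frees the returned buffer):

#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

/* Illustrative helper: dump the whole NVM content of a port. */
static void *
example_read_eeprom(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	int len = rte_eth_dev_get_eeprom_length(port_id);

	if (len <= 0)
		return NULL;
	memset(&info, 0, sizeof(info));
	info.data = malloc(len);
	info.offset = 0;
	info.length = len;
	if (!info.data || rte_eth_dev_get_eeprom(port_id, &info) != 0) {
		free(info.data);
		return NULL;
	}
	return info.data;
}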

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 45 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2d5e73d..e54088e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 };
 
 static void
@@ -2661,3 +2666,43 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
-- 
1.9.3

* [dpdk-dev] [PATCH 14/19] net/ice: support statistics
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (12 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 13/19] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 15/19] net/ice: support queue information getting Wenzhuo Lu
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add below ops,
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset
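
For reference, a sketch pairing the two xstats calls from the application
side (helper name illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Illustrative helper: print every extended statistic of a port. */
static void
example_dump_xstats(uint16_t port_id)
{
	int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *vals;

	if (n <= 0)
		return;
	names = calloc(n, sizeof(*names));
	vals = calloc(n, sizeof(*vals));
	if (names && vals &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[vals[i].id].name, vals[i].value);
	}
	free(names);
	free(vals);
}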

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 574 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 574 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e54088e..3500d5b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,100 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+	{"tx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+		tx_lpi_status)},
+	{"rx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+		rx_lpi_status)},
+	{"tx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+		tx_lpi_count)},
+	{"rx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+		rx_lpi_count)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2706,3 +2806,477 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* read the HW registers to refresh the counters, then fill the struct */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats, reading current register values into offset */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 15/19] net/ice: support queue information getting
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (13 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 14/19] net/ice: support statistics Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 16/19] net/ice: support basic RX/TX Wenzhuo Lu
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the below ops:
rxq_info_get
txq_info_get
rx_queue_count
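
A minimal sketch of reaching these through the public ethdev API
(port 0 / queue 0 are assumptions):

	struct rte_eth_rxq_info qinfo;
	int used;

	/* Backed by ice_rxq_info_get() */
	if (rte_eth_rx_queue_info_get(0, 0, &qinfo) == 0)
		printf("rxq 0: %u descriptors, drop_en=%u\n",
		       qinfo.nb_desc, qinfo.conf.rx_drop_en);

	/* Backed by ice_rx_queue_count(); returns the number of used descriptors */
	used = rte_eth_rx_queue_count(0, 0);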

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  5 ++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3500d5b..6485577 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.rx_queue_count               = ice_rx_queue_count,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index a9e8c03..d4b7277 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -945,6 +945,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of an RX descriptor in each group of 4,
+		 * to avoid checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 16/19] net/ice: support basic RX/TX
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (14 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 15/19] net/ice: support queue information getting Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 17/19] net/ice: support advance RX/TX Wenzhuo Lu
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
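Note: the burst handlers registered in this patch are reached through the
generic ethdev API. A minimal forwarding-loop sketch, assuming an initialized
port 0 with one RX and one TX queue (BURST_SIZE is a hypothetical constant):

	#define BURST_SIZE 32

	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_ok, nb_tx;

	for (;;) {
		/* dispatches to ice_recv_pkts() via dev->rx_pkt_burst */
		nb_rx = rte_eth_rx_burst(0, 0, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;
		/* dispatches to ice_prep_pkts(); validates TSO and offloads */
		nb_ok = rte_eth_tx_prepare(0, 0, pkts, nb_rx);
		/* dispatches to ice_xmit_pkts() via dev->tx_pkt_burst */
		nb_tx = rte_eth_tx_burst(0, 0, pkts, nb_ok);
		/* free anything rejected by prepare or not transmitted */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
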
 drivers/net/ice/ice_ethdev.c   |  14 +
 drivers/net/ice/ice_lan_rxtx.c | 568 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h     |   8 +
 3 files changed, 588 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6485577..21a251f 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1267,7 +1267,19 @@ struct ice_xstats_name_off {
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further, as the
+	 * primary has already done this work. Only the RX and TX burst
+	 * functions need to be set.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+		ice_set_rx_function(dev);
+		ice_set_tx_function(dev);
+		return 0;
+	}
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 	intr_handle = &pci_dev->intr_handle;
@@ -1735,6 +1747,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
+	/* enable Rx interrupt and map Rx queues to interrupt vectors */
 	if (ice_rxq_intr_setup(dev))
 		return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index d4b7277..71ce048 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -908,8 +908,81 @@
 	rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -941,7 +1014,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1052,6 +1127,495 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* new allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the queue's receive tail register.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%lx\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * In the case of a non-tunneling packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus one context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ_PKT) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside the range is considered malicious
+			 */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+		dev->tx_pkt_burst = ice_xmit_pkts;
+		dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 17/19] net/ice: support advance RX/TX
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (15 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 16/19] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 18/19] net/ice: support descriptor ops Wenzhuo Lu
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the advanced RX functions: scattered and bulk-allocation.
Add the simple TX function.
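
The scattered RX path chains a multi-buffer packet through mbuf->next; a
short sketch of walking such a chain on the application side (illustrative
only):

	/* Walk the segments of a packet returned by rte_eth_rx_burst() */
	static void
	dump_segments(const struct rte_mbuf *pkt)
	{
		const struct rte_mbuf *seg;
		uint16_t n = 0;

		for (seg = pkt; seg != NULL; seg = seg->next)
			printf("seg %u: %u bytes\n", n++, seg->data_len);
		printf("total: %u bytes in %u segments\n",
		       pkt->pkt_len, pkt->nb_segs);
	}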

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_lan_rxtx.c | 660 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 658 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 71ce048..07ab677 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -981,6 +981,431 @@
 	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update the RX tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* new allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -1014,7 +1439,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1337,6 +1766,20 @@
 	return 0;
 }
 
+/* Construct the tx flags */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1555,10 +1998,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine whether the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using a Scattered function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1612,8 +2258,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 18/19] net/ice: support descriptor ops
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (16 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 17/19] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note Wenzhuo Lu
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the below ops:
rx_descriptor_done
rx_descriptor_status
tx_descriptor_status
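
A minimal sketch of polling these through the ethdev API, e.g. to estimate
the RX ring fill level without receiving packets (port/queue 0 and nb_rxd,
the configured ring size, are assumptions):

	uint16_t offset;

	for (offset = 0; offset < nb_rxd; offset++) {
		int st = rte_eth_rx_descriptor_status(0, 0, offset);

		if (st != RTE_ETH_RX_DESC_DONE)
			break;
	}
	/* 'offset' now approximates the number of packets ready on the ring */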

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 84 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  3 ++
 3 files changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 21a251f..c9dca15 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,9 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_done           = ice_rx_descriptor_done,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 07ab677..4e6c0ff 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1514,6 +1514,90 @@
 	return desc;
 }
 
+int
+ice_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq = rx_queue;
+	uint16_t desc;
+	int ret;
+
+	if (unlikely(offset >= rxq->nb_rx_desc)) {
+		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
+		return 0;
+	}
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	rxdp = &rxq->rx_ring[desc];
+
+	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		  ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+		 (1 << ICE_RX_DESC_STATUS_DD_S));
+
+	return ret;
+}
+
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..12ad383 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,9 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_done(void *rx_queue, uint16_t offset);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (17 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 18/19] net/ice: support descriptor ops Wenzhuo Lu
@ 2018-11-23  6:56 ` Wenzhuo Lu
  2018-11-23  7:45   ` Varghese, Vipin
  2018-11-23 11:00 ` [dpdk-dev] [PATCH 00/19] A new net PMD - ice Thomas Monjalon
                   ` (5 subsequent siblings)
  24 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-11-23  6:56 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |  1 +
 doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
 doc/guides/nics/ice.rst          | 78 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 118 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 00a5e03..2e1a537 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/features/ice*.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..2be52ca
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,39 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = Y
+QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Extended stats       = Y
+FW version           = Y
+Module EEPROM dump   = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..12a12d2
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,78 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+======================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+The VLAN filter only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
+
+
+Limitations or Known issues
+---------------------------
+
+N/A
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-11-23  7:45   ` Varghese, Vipin
  2018-11-26  3:42     ` Yang, Qiming
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-23  7:45 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Hi Wenzhuo

<snipped>

> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  MAINTAINERS                      |  1 +
>  doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
>  doc/guides/nics/ice.rst          | 78
> ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 118 insertions(+)
>  create mode 100644 doc/guides/nics/features/ice.ini  create mode 100644
> doc/guides/nics/ice.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 00a5e03..2e1a537 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
>  M: Wenzhuo Lu <wenzhuo.lu@intel.com>
>  T: git://dpdk.org/next/dpdk-next-net-intel
>  F: drivers/net/ice/
> +F: doc/guides/nics/features/ice*.ini
> 
>  Marvell mvpp2
>  M: Tomasz Duszynski <tdu@semihalf.com>
> diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
> new file mode 100644
> index 0000000..2be52ca
> --- /dev/null
> +++ b/doc/guides/nics/features/ice.ini
> @@ -0,0 +1,39 @@
> +;
> +; Supported features of the 'ice' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
> +Rx interrupt         = Y
> +Queue start/stop     = Y
> +MTU update           = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +TSO                  = Y
> +Unicast MAC filter   = Y
> +Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y
> +VLAN filter          = Y
> +CRC offload          = Y
> +VLAN offload         = Y
> +QinQ offload         = Y
> +L3 checksum offload  = Y
> +L4 checksum offload  = Y
> +Packet type parsing  = Y
> +Rx descriptor status = Y
> +Tx descriptor status = Y
> +Basic stats          = Y
> +Extended stats       = Y
> +FW version           = Y
> +Module EEPROM dump   = Y
> +Multiprocess aware   = Y
> +BSD nic_uio          = Y
> +Linux UIO            = Y
> +Linux VFIO           = Y
> +x86-32               = Y
> +x86-64               = Y

Are all of these features also supported in secondary multi-process mode?

> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst new file mode
> 100644 index 0000000..12a12d2
> --- /dev/null
> +++ b/doc/guides/nics/ice.rst
> @@ -0,0 +1,78 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018 Intel Corporation.
> +
> +ICE Poll Mode Driver
> +======================
> +
> +The ice PMD (librte_pmd_ice) provides poll mode driver support for
> +10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on the
> +Intel Ethernet Controller E810.
> +
> +
> +Prerequisites
> +-------------
> +
> +- Identifying your adapter using `Intel Support
> +  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup
> the basic DPDK environment.
> +
> +- To get better performance on Intel platforms, please follow the "How to get
> best performance with NICs on Intel platforms"
> +  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +Please note that enabling debugging options may affect system performance.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
> +
> +  Toggle compilation of the ``librte_pmd_ice`` driver.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
> +
> +  Toggle display of generic debugging messages.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
> +
> +  Toggle bulk allocation for RX.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
> +
> +  Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
> +
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC
> +<pmd_build_and_test>` for details.
> +
> +
> +Sample Application Notes
> +------------------------
> +
> +Vlan filter
> +~~~~~~~~~~~
> +
> +Vlan filter only works when Promiscuous mode is off.
> +
> +To start ``testpmd``, and add vlan 10 to port 0:
> +
> +.. code-block:: console
> +
> +    ./app/testpmd -l 0-15 -n 4 -- -i
> +    ...
> +
> +    testpmd> rx_vlan add 10 0
> +
> +
> +Limitations or Known issues
> +---------------------------
> +

If there are features that are not supported in a secondary process, could you mention them here?
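
To make the request concrete, the section could carry a note along these lines (suggested wording only, not part of the patch):

    Limitations or Known issues
    ---------------------------

    Control path operations are supported only in the primary
    process; in a secondary process they return ``-E_RTE_SECONDARY``.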

> +N/A
> --
> 1.9.3


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 02/19] net/ice: support device initialization Wenzhuo Lu
@ 2018-11-23  7:56   ` Varghese, Vipin
  2018-11-26  5:09     ` Li, Xiaoyun
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-23  7:56 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Wenzhuo,

<snipped>

> Subject: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---

<snipped>

> +static int
> +ice_dev_init(struct rte_eth_dev *dev)
> +{
> +	struct rte_pci_device *pci_dev;
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	int ret;
> +
> +	dev->dev_ops = &ice_eth_dev_ops;
> +
> +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> +
> +	rte_eth_copy_pci_info(dev, pci_dev);
> +	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data-
> >dev_private);
> +	pf->adapter->eth_dev = dev;
> +	pf->dev_data = dev->data;
> +	hw->back = pf->adapter;
> +	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
> +	hw->vendor_id = pci_dev->id.vendor_id;
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
> +	hw->bus.device = pci_dev->addr.devid;
> +	hw->bus.func = pci_dev->addr.function;
> +
> +	ice_init_controlq_parameter(hw);
> +

Do we check whether the process is secondary and the ICE PMD is already initialized? If we do not check, will we run into multi-process re-initialization?

> +	ret = ice_init_hw(hw);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> +		return -EINVAL;
> +	}
> +
> +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> +		     hw->api_maj_ver, hw->api_min_ver);
> +
> +	ice_pf_sw_init(dev);
> +	ret = ice_init_mac_address(dev);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
> +		goto err_init_mac;
> +	}

I assume that in a secondary multi-process setup this will be skipped if the primary has already initialized it. Is this understanding correct?

> +
> +	ret = ice_res_pool_init(&pf->msix_pool, 1,
> +				hw-
> >func_caps.common_cap.num_msix_vectors - 1);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
> +		goto err_msix_pool_init;
> +	}
> +
> +	ret = ice_pf_setup(pf);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to setup PF");
> +		goto err_pf_setup;
> +	}

Are the pool init and PF setup also skipped for the secondary process if the primary has already done them?

> +
> +	return 0;
> +
> +err_pf_setup:
> +	ice_res_pool_destroy(&pf->msix_pool);
> +err_msix_pool_init:
> +	rte_free(dev->data->mac_addrs);
> +err_init_mac:
> +	ice_sched_cleanup_all(hw);
> +	rte_free(hw->port_info);
> +	ice_shutdown_all_ctrlq(hw);
> +
> +	return ret;
> +}
> +
> +static int
> +ice_release_vsi(struct ice_vsi *vsi)
> +{
> +	struct ice_hw *hw;
> +	struct ice_vsi_ctx vsi_ctx;
> +	enum ice_status ret;
> +
> +	if (!vsi)
> +		return 0;

Should we check whether the process is secondary and the primary sees the port, and then skip the destroy?
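
Spelled out, the suggested guard would be something like this at the top of ice_release_vsi (a hypothetical sketch, not code from the patch):

	/* Hypothetical guard: skip freeing the VSI in a secondary
	 * process, since it is shared state owned by the primary.
	 */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return 0;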

> +
> +	hw = ICE_VSI_TO_HW(vsi);
> +
> +	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
> +
> +	vsi_ctx.vsi_num = vsi->vsi_id;
> +	vsi_ctx.info = vsi->info;
> +	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
> +	if (ret != ICE_SUCCESS) {
> +		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
> +		rte_free(vsi);
> +		return -1;
> +	}
> +
> +	rte_free(vsi);
> +	return 0;
> +}
> +
> +static int
> +ice_dev_uninit(struct rte_eth_dev *dev) {
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		return 0;
> +

Here we have a check for secondary, but if the port was added in the secondary process and not in the primary, is it valid to return 0?

> +	ice_dev_close(dev);
> +
> +	dev->dev_ops = NULL;
> +	dev->rx_pkt_burst = NULL;
> +	dev->tx_pkt_burst = NULL;
> +
> +	rte_free(dev->data->mac_addrs);
> +	dev->data->mac_addrs = NULL;
> +
> +	ice_release_vsi(pf->main_vsi);
> +	ice_sched_cleanup_all(hw);
> +	rte_free(hw->port_info);
> +	ice_shutdown_all_ctrlq(hw);
> +
> +	return 0;
> +}
> +

<snipped>

> +static void
> +ice_dev_close(struct rte_eth_dev *dev)
> +{
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		return;
> +

Same as the previous comment: if the port is started in the secondary process it will not be seen in the primary. Hence, is it right to return early without checking?

> +	ice_res_pool_destroy(&pf->msix_pool);
> +	ice_release_vsi(pf->main_vsi);
> +
> +	ice_shutdown_all_ctrlq(hw);
> +}

<snipped>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting Wenzhuo Lu
@ 2018-11-23  9:58   ` Varghese, Vipin
  2018-11-26  3:38     ` Yang, Qiming
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-23  9:58 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Wenzhuo,

The following is a thought, not an issue. Could you please let me know what you think?

<snipped>

> +static int
> +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
> +
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		return -E_RTE_SECONDARY;
> +
> +	/* check if mtu is within the allowed range */
> +	if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
> +		return -EINVAL;
> +

Should we allow setting MTU > 1500 (jumbo frame) if the device is not configured to run with jumbo frames? If not, should we check that the jumbo config is enabled for the current device?
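
To make the suggestion concrete, a hypothetical extra check, reusing the names from the quoted patch (a sketch, not code from this series), could be:

	/* Hypothetical: refuse an MTU that implies jumbo frames unless
	 * the port was configured with the jumbo frame RX offload.
	 */
	if (frame_size > ETHER_MAX_LEN &&
	    !(dev_data->dev_conf.rxmode.offloads &
	      DEV_RX_OFFLOAD_JUMBO_FRAME))
		return -EINVAL;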

> +	/* mtu setting is forbidden if port is start */
> +	if (dev_data->dev_started) {
> +		PMD_DRV_LOG(ERR,
> +			    "port %d must be stopped before configuration",
> +			    dev_data->port_id);
> +		return -EBUSY;
> +	}
> +
> +	if (frame_size > ETHER_MAX_LEN)
> +		dev_data->dev_conf.rxmode.offloads |=
> +			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	else
> +		dev_data->dev_conf.rxmode.offloads &=
> +			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> +
> +	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +
> +	return 0;
> +}
> --
> 1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (18 preceding siblings ...)
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-11-23 11:00 ` Thomas Monjalon
  2018-12-05  6:39   ` Lu, Wenzhuo
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                   ` (4 subsequent siblings)
  24 siblings, 1 reply; 309+ messages in thread
From: Thomas Monjalon @ 2018-11-23 11:00 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev

Hi Wenzhuo,

23/11/2018 07:56, Wenzhuo Lu:
>   net/ice: add base code

This first patch is really too big.
Please could you try to split it logically?
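
For what it's worth, one generic way to split an oversized commit is an interactive rebase with partial staging (standard git commands, not instructions from this thread):

    git rebase -i <base>    # mark the big commit as 'edit'
    git reset HEAD^         # return its changes to the working tree
    git add -p              # stage one logical piece
    git commit -s           # commit it; repeat add/commit as needed
    git rebase --continue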

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
  2018-11-23  9:58   ` Varghese, Vipin
@ 2018-11-26  3:38     ` Yang, Qiming
  2018-11-26  3:58       ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Yang, Qiming @ 2018-11-26  3:38 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Li, Xiaoyun, Wu, Jingjing

Hi, Vipin
I am not sure I understand your question.
There is no need to configure jumbo frame explicitly, because the jumbo frame offload will be enabled automatically when MTU > 1500.
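
In other words, from testpmd the offload simply follows the MTU; for example (generic testpmd commands, not from this thread), setting the MTU above 1500 on a stopped port turns the jumbo offload on, and setting it back to 1500 clears it:

    testpmd> port stop 0
    testpmd> port config mtu 0 9000
    testpmd> port config mtu 0 1500
    testpmd> port start 0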

Qiming
-----Original Message-----
From: Varghese, Vipin 
Sent: Friday, November 23, 2018 5:58 PM
To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
Subject: RE: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting

HI Wenzhou,

Following is a thought but not an issue. Can you please let me know your thought?

<snipped>

> +static int
> +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
> +
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		return -E_RTE_SECONDARY;
> +
> +	/* check if mtu is within the allowed range */
> +	if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
> +		return -EINVAL;
> +

Should we set MTU > 1500 (Jumbo frame) if device is not configured to run with jumbo frame? If no, should we check the jumbo config is enabled for the current device?

> +	/* mtu setting is forbidden if port is start */
> +	if (dev_data->dev_started) {
> +		PMD_DRV_LOG(ERR,
> +			    "port %d must be stopped before configuration",
> +			    dev_data->port_id);
> +		return -EBUSY;
> +	}
> +
> +	if (frame_size > ETHER_MAX_LEN)
> +		dev_data->dev_conf.rxmode.offloads |=
> +			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	else
> +		dev_data->dev_conf.rxmode.offloads &=
> +			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> +
> +	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +
> +	return 0;
> +}
> --
> 1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note
  2018-11-23  7:45   ` Varghese, Vipin
@ 2018-11-26  3:42     ` Yang, Qiming
  2018-11-26  3:59       ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Yang, Qiming @ 2018-11-26  3:42 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Hi, Vipin
Not all features are enabled for the secondary process. We add a secondary-process check in each of those ops, like below:
	
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return -E_RTE_SECONDARY;
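
On the application side a caller can then recognize that error; a minimal sketch, assuming rte_ethdev.h and rte_errno.h are included and using the MTU op as an example:

	int ret = rte_eth_dev_set_mtu(port_id, mtu);

	if (ret == -E_RTE_SECONDARY)
		/* primary-only op: ask the primary process to do it */
		printf("port %u: set MTU from the primary process\n",
		       port_id);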

Qiming

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
Sent: Friday, November 23, 2018 3:45 PM
To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
Subject: Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note

Hi Wenzhuo

<snipped>

> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  MAINTAINERS                      |  1 +
>  doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
>  doc/guides/nics/ice.rst          | 78
> ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 118 insertions(+)
>  create mode 100644 doc/guides/nics/features/ice.ini  create mode 
> 100644 doc/guides/nics/ice.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS index 00a5e03..2e1a537 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
>  M: Wenzhuo Lu <wenzhuo.lu@intel.com>
>  T: git://dpdk.org/next/dpdk-next-net-intel
>  F: drivers/net/ice/
> +F: doc/guides/nics/features/ice*.ini
> 
>  Marvell mvpp2
>  M: Tomasz Duszynski <tdu@semihalf.com> diff --git 
> a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
> new file mode 100644
> index 0000000..2be52ca
> --- /dev/null
> +++ b/doc/guides/nics/features/ice.ini
> @@ -0,0 +1,39 @@
> +;
> +; Supported features of the 'ice' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
> +Rx interrupt         = Y
> +Queue start/stop     = Y
> +MTU update           = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +TSO                  = Y
> +Unicast MAC filter   = Y
> +Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y
> +VLAN filter          = Y
> +CRC offload          = Y
> +VLAN offload         = Y
> +QinQ offload         = Y
> +L3 checksum offload  = Y
> +L4 checksum offload  = Y
> +Packet type parsing  = Y
> +Rx descriptor status = Y
> +Tx descriptor status = Y
> +Basic stats          = Y
> +Extended stats       = Y
> +FW version           = Y
> +Module EEPROM dump   = Y
> +Multiprocess aware   = Y
> +BSD nic_uio          = Y
> +Linux UIO            = Y
> +Linux VFIO           = Y
> +x86-32               = Y
> +x86-64               = Y

Is all the features also supported for secondary multiprocess?

> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst new 
> file mode
> 100644 index 0000000..12a12d2
> --- /dev/null
> +++ b/doc/guides/nics/ice.rst
> @@ -0,0 +1,78 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018 Intel Corporation.
> +
> +ICE Poll Mode Driver
> +======================
> +
> +The ice PMD (librte_pmd_ice) provides poll mode driver support for
> +10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on the 
> +Intel Ethernet Controller E810.
> +
> +
> +Prerequisites
> +-------------
> +
> +- Identifying your adapter using `Intel Support
> +  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` 
> +to setup
> the basic DPDK environment.
> +
> +- To get better performance on Intel platforms, please follow the 
> +"How to get
> best performance with NICs on Intel platforms"
> +  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +Please note that enabling debugging options may affect system performance.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
> +
> +  Toggle compilation of the ``librte_pmd_ice`` driver.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
> +
> +  Toggle display of generic debugging messages.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
> +
> +  Toggle bulk allocation for RX.
> +
> +- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
> +
> +  Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
> +
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC 
> +<pmd_build_and_test>` for details.
> +
> +
> +Sample Application Notes
> +------------------------
> +
> +Vlan filter
> +~~~~~~~~~~~
> +
> +Vlan filter only works when Promiscuous mode is off.
> +
> +To start ``testpmd``, and add vlan 10 to port 0:
> +
> +.. code-block:: console
> +
> +    ./app/testpmd -l 0-15 -n 4 -- -i
> +    ...
> +
> +    testpmd> rx_vlan add 10 0
> +
> +
> +Limitations or Known issues
> +---------------------------
> +

If there are features not supported in secondary, can you mention the same here?

> +N/A
> --
> 1.9.3


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
  2018-11-26  3:38     ` Yang, Qiming
@ 2018-11-26  3:58       ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-26  3:58 UTC (permalink / raw)
  To: Yang, Qiming, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Li, Xiaoyun, Wu, Jingjing

Hi Qiming

> -----Original Message-----
> From: Yang, Qiming
> Sent: Monday, November 26, 2018 9:09 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
> 
> Hi, Vipin
> Not sure understand your question.
> We have no need to configure jumbo frame, because jumbo frame offload will be
> enable when mtu>1500.

Apologies if the question is not clear. Let me try to explain below what I am trying to ask.

> 
> Qiming
> -----Original Message-----
> From: Varghese, Vipin
> Sent: Friday, November 23, 2018 5:58 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting
> 
> HI Wenzhou,
> 
> Following is a thought but not an issue. Can you please let me know your
> thought?
> 
> <snipped>
> 
> > +static int
> > +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> > +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> > +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
> > +
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		return -E_RTE_SECONDARY;
> > +
> > +	/* check if mtu is within the allowed range */
> > +	if ((mtu < ETHER_MIN_MTU) || (frame_size > ICE_FRAME_SIZE_MAX))
> > +		return -EINVAL;
> > +
> 
> Should we set MTU > 1500 (Jumbo frame) if device is not configured to run with
> jumbo frame? If no, should we check the jumbo config is enabled for the current
> device?
> 

1. Is JUMBO enabled by default for this device?
2. There is an RX offload configured with the value 'DEV_RX_OFFLOAD_JUMBO_FRAME'. Should this be enabled to support JUMBO frames?
3. There is also a per-RX-queue offload flag 'DEV_RX_OFFLOAD_JUMBO_FRAME'. If port_conf sets 'DEV_RX_OFFLOAD_JUMBO_FRAME' but it is disabled for a specific RX queue, what is the JUMBO setting for the device?

Hence, do we need to check in 'mtu_set' whether the device is actually configured for JUMBO processing or not?
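
Question 3 in code form, as a sketch only: by the generic ethdev convention per-queue offloads can only add to, never clear, the port-level ones ('rxq->offloads' below is assumed to hold the queue's configured flags):

	uint64_t effective = dev->data->dev_conf.rxmode.offloads |
			     rxq->offloads;
	/* a port-level JUMBO_FRAME therefore cannot be switched off
	 * for an individual RX queue
	 */
	bool jumbo_on = !!(effective & DEV_RX_OFFLOAD_JUMBO_FRAME);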

> > +	/* mtu setting is forbidden if port is start */
> > +	if (dev_data->dev_started) {
> > +		PMD_DRV_LOG(ERR,
> > +			    "port %d must be stopped before configuration",
> > +			    dev_data->port_id);
> > +		return -EBUSY;
> > +	}
> > +
> > +	if (frame_size > ETHER_MAX_LEN)
> > +		dev_data->dev_conf.rxmode.offloads |=
> > +			DEV_RX_OFFLOAD_JUMBO_FRAME;
> > +	else
> > +		dev_data->dev_conf.rxmode.offloads &=
> > +			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > +
> > +	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> > +
> > +	return 0;
> > +}
> > --
> > 1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note
  2018-11-26  3:42     ` Yang, Qiming
@ 2018-11-26  3:59       ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-26  3:59 UTC (permalink / raw)
  To: Yang, Qiming, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Hi Qiming,

Thanks for the update. Please also feel free to document the limitations in the release notes, i.e. which features are not supported.

Note: do we plan to add unit test cases for the same?

> -----Original Message-----
> From: Yang, Qiming
> Sent: Monday, November 26, 2018 9:13 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update
> release note
> 
> Hi, Vipin
> Not all feature enabled for secondary process. We add secondary process check
> in each ops like below,
> 
> if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> 		return -E_RTE_SECONDARY;
> 
> Qiming
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
> Sent: Friday, November 23, 2018 3:45 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 19/19] doc: add ICE description and update
> release note
> 
> Hi Wenzhuo
> 
> <snipped>
> 
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > ---
> >  MAINTAINERS                      |  1 +
> >  doc/guides/nics/features/ice.ini | 39 ++++++++++++++++++++
> >  doc/guides/nics/ice.rst          | 78
> > ++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 118 insertions(+)
> >  create mode 100644 doc/guides/nics/features/ice.ini  create mode
> > 100644 doc/guides/nics/ice.rst
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS index 00a5e03..2e1a537 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
> >  M: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >  T: git://dpdk.org/next/dpdk-next-net-intel
> >  F: drivers/net/ice/
> > +F: doc/guides/nics/features/ice*.ini
> >
> >  Marvell mvpp2
> >  M: Tomasz Duszynski <tdu@semihalf.com> diff --git
> > a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
> > new file mode 100644
> > index 0000000..2be52ca
> > --- /dev/null
> > +++ b/doc/guides/nics/features/ice.ini
> > @@ -0,0 +1,39 @@
> > +;
> > +; Supported features of the 'ice' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Speed capabilities   = Y
> > +Link status          = Y
> > +Link status event    = Y
> > +Rx interrupt         = Y
> > +Queue start/stop     = Y
> > +MTU update           = Y
> > +Jumbo frame          = Y
> > +Scattered Rx         = Y
> > +TSO                  = Y
> > +Unicast MAC filter   = Y
> > +Multicast MAC filter = Y
> > +RSS hash             = Y
> > +RSS key update       = Y
> > +RSS reta update      = Y
> > +VLAN filter          = Y
> > +CRC offload          = Y
> > +VLAN offload         = Y
> > +QinQ offload         = Y
> > +L3 checksum offload  = Y
> > +L4 checksum offload  = Y
> > +Packet type parsing  = Y
> > +Rx descriptor status = Y
> > +Tx descriptor status = Y
> > +Basic stats          = Y
> > +Extended stats       = Y
> > +FW version           = Y
> > +Module EEPROM dump   = Y
> > +Multiprocess aware   = Y
> > +BSD nic_uio          = Y
> > +Linux UIO            = Y
> > +Linux VFIO           = Y
> > +x86-32               = Y
> > +x86-64               = Y
> 
> Is all the features also supported for secondary multiprocess?
> 
> > diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst new
> > file mode
> > 100644 index 0000000..12a12d2
> > --- /dev/null
> > +++ b/doc/guides/nics/ice.rst
> > @@ -0,0 +1,78 @@
> > +..  SPDX-License-Identifier: BSD-3-Clause
> > +    Copyright(c) 2018 Intel Corporation.
> > +
> > +ICE Poll Mode Driver
> > +======================
> > +
> > +The ice PMD (librte_pmd_ice) provides poll mode driver support for
> > +10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on the
> > +Intel Ethernet Controller E810.
> > +
> > +
> > +Prerequisites
> > +-------------
> > +
> > +- Identifying your adapter using `Intel Support
> > +  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
> > +
> > +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
> > +to setup
> > the basic DPDK environment.
> > +
> > +- To get better performance on Intel platforms, please follow the
> > +"How to get
> > best performance with NICs on Intel platforms"
> > +  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
> > +
> > +
> > +Pre-Installation Configuration
> > +------------------------------
> > +
> > +Config File Options
> > +~~~~~~~~~~~~~~~~~~~
> > +
> > +The following options can be modified in the ``config`` file.
> > +Please note that enabling debugging options may affect system performance.
> > +
> > +- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
> > +
> > +  Toggle compilation of the ``librte_pmd_ice`` driver.
> > +
> > +- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
> > +
> > +  Toggle display of generic debugging messages.
> > +
> > +- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
> > +
> > +  Toggle bulk allocation for RX.
> > +
> > +- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
> > +
> > +  Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32 byte.
> > +
> > +
> > +Driver compilation and testing
> > +------------------------------
> > +
> > +Refer to the document :ref:`compiling and testing a PMD for a NIC
> > +<pmd_build_and_test>` for details.
> > +
> > +
> > +Sample Application Notes
> > +------------------------
> > +
> > +Vlan filter
> > +~~~~~~~~~~~
> > +
> > +Vlan filter only works when Promiscuous mode is off.
> > +
> > +To start ``testpmd``, and add vlan 10 to port 0:
> > +
> > +.. code-block:: console
> > +
> > +    ./app/testpmd -l 0-15 -n 4 -- -i
> > +    ...
> > +
> > +    testpmd> rx_vlan add 10 0
> > +
> > +
> > +Limitations or Known issues
> > +---------------------------
> > +
> 
> If there are features not supported in secondary, can you mention the same
> here?
> 
> > +N/A
> > --
> > 1.9.3


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-23  7:56   ` Varghese, Vipin
@ 2018-11-26  5:09     ` Li, Xiaoyun
  2018-11-26  5:13       ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Li, Xiaoyun @ 2018-11-26  5:09 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Wu, Jingjing

Hi

> Do we check if process is secondary and ICE PMD is already is initialized? If we
> do not check will we run to multi process reinitilization?

Yes, we check. It is in [PATCH 16/19] net/ice: support basic RX/TX; please see that.
We roughly split the ice code across the different patches, so the file in any single patch is not complete.
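
For readers following the thread, such a check usually takes the common PMD shape below; this is a sketch of the general pattern (illustrative function names), not the actual hunk from patch 16:

	/* A secondary process only attaches the fast-path function
	 * pointers; all shared device state was already created by
	 * the primary process.
	 */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
		dev->rx_pkt_burst = ice_recv_pkts;
		dev->tx_pkt_burst = ice_xmit_pkts;
		return 0;
	}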

> 
> > +	ret = ice_init_hw(hw);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > +		return -EINVAL;
> > +	}
> > +
> > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > +		     hw->api_maj_ver, hw->api_min_ver);
> > +
> > +	ice_pf_sw_init(dev);
> > +	ret = ice_init_mac_address(dev);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
> > +		goto err_init_mac;
> > +	}
> 
> Assuming in secondary multi process this will be skipped if primary has already
> initialized. Is this understanding correct?
> 
> > +
> > +	ret = ice_res_pool_init(&pf->msix_pool, 1,
> > +				hw-
> > >func_caps.common_cap.num_msix_vectors - 1);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
> > +		goto err_msix_pool_init;
> > +	}
> > +
> > +	ret = ice_pf_setup(pf);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to setup PF");
> > +		goto err_pf_setup;
> > +	}
> 
> Pool init and pf setup also for secondary skip if primary is done?
> 
> > +
> > +	return 0;
> > +
> > +err_pf_setup:
> > +	ice_res_pool_destroy(&pf->msix_pool);
> > +err_msix_pool_init:
> > +	rte_free(dev->data->mac_addrs);
> > +err_init_mac:
> > +	ice_sched_cleanup_all(hw);
> > +	rte_free(hw->port_info);
> > +	ice_shutdown_all_ctrlq(hw);
> > +
> > +	return ret;
> > +}
> > +
> > +static int
> > +ice_release_vsi(struct ice_vsi *vsi)
> > +{
> > +	struct ice_hw *hw;
> > +	struct ice_vsi_ctx vsi_ctx;
> > +	enum ice_status ret;
> > +
> > +	if (!vsi)
> > +		return 0;
> 
> Should we check if process is secondary and primary sees the port, then skip the
> destroy?
> 
> > +
> > +	hw = ICE_VSI_TO_HW(vsi);
> > +
> > +	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
> > +
> > +	vsi_ctx.vsi_num = vsi->vsi_id;
> > +	vsi_ctx.info = vsi->info;
> > +	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
> > +	if (ret != ICE_SUCCESS) {
> > +		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
> > +		rte_free(vsi);
> > +		return -1;
> > +	}
> > +
> > +	rte_free(vsi);
> > +	return 0;
> > +}
> > +
> > +static int
> > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		return 0;
> > +
> 
> Here we have check for secondary, but if the port is added in secondary and not
> primary is it valid to return 0?
> 
> > +	ice_dev_close(dev);
> > +
> > +	dev->dev_ops = NULL;
> > +	dev->rx_pkt_burst = NULL;
> > +	dev->tx_pkt_burst = NULL;
> > +
> > +	rte_free(dev->data->mac_addrs);
> > +	dev->data->mac_addrs = NULL;
> > +
> > +	ice_release_vsi(pf->main_vsi);
> > +	ice_sched_cleanup_all(hw);
> > +	rte_free(hw->port_info);
> > +	ice_shutdown_all_ctrlq(hw);
> > +
> > +	return 0;
> > +}
> > +
> 
> <snipped>
> 
> > +static void
> > +ice_dev_close(struct rte_eth_dev *dev) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		return;
> > +
> 
> Same as previous comment, if port is started in secondary it will not be seen in
> primary. Hence is it right to return 0 without checking?
> 
> > +	ice_res_pool_destroy(&pf->msix_pool);
> > +	ice_release_vsi(pf->main_vsi);
> > +
> > +	ice_shutdown_all_ctrlq(hw);
> > +}
> 
> <snipped>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-26  5:09     ` Li, Xiaoyun
@ 2018-11-26  5:13       ` Varghese, Vipin
  2018-11-26  5:19         ` Li, Xiaoyun
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-26  5:13 UTC (permalink / raw)
  To: Li, Xiaoyun, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Wu, Jingjing

Thanks for the update; so we can assume all functions are available in the secondary process, and anything not supported will return the right error code, with the documentation updated for the limitation too.

> -----Original Message-----
> From: Li, Xiaoyun
> Sent: Monday, November 26, 2018 10:40 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
> 
> Hi
> 
> > Do we check if process is secondary and ICE PMD is already is
> > initialized? If we do not check will we run to multi process reinitilization?
> 
> Yes. We check. It is in [PATCH 16/19] net/ice: support basic RX/TX. Please see
> that.
> We roughly split the ice codes in different patch. So the file in only one patch is
> not complete.
> 
> >
> > > +	ret = ice_init_hw(hw);
> > > +	if (ret) {
> > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > +
> > > +	ice_pf_sw_init(dev);
> > > +	ret = ice_init_mac_address(dev);
> > > +	if (ret) {
> > > +		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
> > > +		goto err_init_mac;
> > > +	}
> >
> > Assuming in secondary multi process this will be skipped if primary
> > has already initialized. Is this understanding correct?
> >
> > > +
> > > +	ret = ice_res_pool_init(&pf->msix_pool, 1,
> > > +				hw-
> > > >func_caps.common_cap.num_msix_vectors - 1);
> > > +	if (ret) {
> > > +		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
> > > +		goto err_msix_pool_init;
> > > +	}
> > > +
> > > +	ret = ice_pf_setup(pf);
> > > +	if (ret) {
> > > +		PMD_INIT_LOG(ERR, "Failed to setup PF");
> > > +		goto err_pf_setup;
> > > +	}
> >
> > Pool init and pf setup also for secondary skip if primary is done?
> >
> > > +
> > > +	return 0;
> > > +
> > > +err_pf_setup:
> > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > +err_msix_pool_init:
> > > +	rte_free(dev->data->mac_addrs);
> > > +err_init_mac:
> > > +	ice_sched_cleanup_all(hw);
> > > +	rte_free(hw->port_info);
> > > +	ice_shutdown_all_ctrlq(hw);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static int
> > > +ice_release_vsi(struct ice_vsi *vsi) {
> > > +	struct ice_hw *hw;
> > > +	struct ice_vsi_ctx vsi_ctx;
> > > +	enum ice_status ret;
> > > +
> > > +	if (!vsi)
> > > +		return 0;
> >
> > Should we check if process is secondary and primary sees the port,
> > then skip the destroy?
> >
> > > +
> > > +	hw = ICE_VSI_TO_HW(vsi);
> > > +
> > > +	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
> > > +
> > > +	vsi_ctx.vsi_num = vsi->vsi_id;
> > > +	vsi_ctx.info = vsi->info;
> > > +	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
> > > +	if (ret != ICE_SUCCESS) {
> > > +		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
> > > +		rte_free(vsi);
> > > +		return -1;
> > > +	}
> > > +
> > > +	rte_free(vsi);
> > > +	return 0;
> > > +}
> > > +
> > > +static int
> > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > +
> > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > +		return 0;
> > > +
> >
> > Here we have check for secondary, but if the port is added in
> > secondary and not primary is it valid to return 0?
> >
> > > +	ice_dev_close(dev);
> > > +
> > > +	dev->dev_ops = NULL;
> > > +	dev->rx_pkt_burst = NULL;
> > > +	dev->tx_pkt_burst = NULL;
> > > +
> > > +	rte_free(dev->data->mac_addrs);
> > > +	dev->data->mac_addrs = NULL;
> > > +
> > > +	ice_release_vsi(pf->main_vsi);
> > > +	ice_sched_cleanup_all(hw);
> > > +	rte_free(hw->port_info);
> > > +	ice_shutdown_all_ctrlq(hw);
> > > +
> > > +	return 0;
> > > +}
> > > +
> >
> > <snipped>
> >
> > > +static void
> > > +ice_dev_close(struct rte_eth_dev *dev) {
> > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > +
> > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > +		return;
> > > +
> >
> > Same as previous comment, if port is started in secondary it will not
> > be seen in primary. Hence is it right to return 0 without checking?
> >
> > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > +	ice_release_vsi(pf->main_vsi);
> > > +
> > > +	ice_shutdown_all_ctrlq(hw);
> > > +}
> >
> > <snipped>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-26  5:13       ` Varghese, Vipin
@ 2018-11-26  5:19         ` Li, Xiaoyun
  2018-11-26  5:22           ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Li, Xiaoyun @ 2018-11-26  5:19 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Wu, Jingjing

Yes.

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Monday, November 26, 2018 13:14
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
> 
> Thanks for the update, so we can assume all functions are available in secondary.
> If not supported it will return right error code and documentation is updated for
> the limitation too.
> 
> > -----Original Message-----
> > From: Li, Xiaoyun
> > Sent: Monday, November 26, 2018 10:40 AM
> > To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device
> > initialization
> >
> > Hi
> >
> > > Do we check if process is secondary and ICE PMD is already is
> > > initialized? If we do not check will we run to multi process reinitilization?
> >
> > Yes. We check. It is in [PATCH 16/19] net/ice: support basic RX/TX.
> > Please see that.
> > We roughly split the ice codes in different patch. So the file in only
> > one patch is not complete.
> >
> > >
> > > > +	ret = ice_init_hw(hw);
> > > > +	if (ret) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > > +		return -EINVAL;
> > > > +	}
> > > > +
> > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > +
> > > > +	ice_pf_sw_init(dev);
> > > > +	ret = ice_init_mac_address(dev);
> > > > +	if (ret) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
> > > > +		goto err_init_mac;
> > > > +	}
> > >
> > > Assuming in secondary multi process this will be skipped if primary
> > > has already initialized. Is this understanding correct?
> > >
> > > > +
> > > > +	ret = ice_res_pool_init(&pf->msix_pool, 1,
> > > > +				hw-
> > > > >func_caps.common_cap.num_msix_vectors - 1);
> > > > +	if (ret) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
> > > > +		goto err_msix_pool_init;
> > > > +	}
> > > > +
> > > > +	ret = ice_pf_setup(pf);
> > > > +	if (ret) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to setup PF");
> > > > +		goto err_pf_setup;
> > > > +	}
> > >
> > > Pool init and pf setup also for secondary skip if primary is done?
> > >
> > > > +
> > > > +	return 0;
> > > > +
> > > > +err_pf_setup:
> > > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > > +err_msix_pool_init:
> > > > +	rte_free(dev->data->mac_addrs);
> > > > +err_init_mac:
> > > > +	ice_sched_cleanup_all(hw);
> > > > +	rte_free(hw->port_info);
> > > > +	ice_shutdown_all_ctrlq(hw);
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static int
> > > > +ice_release_vsi(struct ice_vsi *vsi) {
> > > > +	struct ice_hw *hw;
> > > > +	struct ice_vsi_ctx vsi_ctx;
> > > > +	enum ice_status ret;
> > > > +
> > > > +	if (!vsi)
> > > > +		return 0;
> > >
> > > Should we check if process is secondary and primary sees the port,
> > > then skip the destroy?
> > >
> > > > +
> > > > +	hw = ICE_VSI_TO_HW(vsi);
> > > > +
> > > > +	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
> > > > +
> > > > +	vsi_ctx.vsi_num = vsi->vsi_id;
> > > > +	vsi_ctx.info = vsi->info;
> > > > +	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
> > > > +	if (ret != ICE_SUCCESS) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
> > > > +		rte_free(vsi);
> > > > +		return -1;
> > > > +	}
> > > > +
> > > > +	rte_free(vsi);
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int
> > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > >dev_private);
> > > > +	struct ice_pf *pf =
> > > > +ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > +
> > > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > > +		return 0;
> > > > +
> > >
> > > Here we have check for secondary, but if the port is added in
> > > secondary and not primary is it valid to return 0?
> > >
> > > > +	ice_dev_close(dev);
> > > > +
> > > > +	dev->dev_ops = NULL;
> > > > +	dev->rx_pkt_burst = NULL;
> > > > +	dev->tx_pkt_burst = NULL;
> > > > +
> > > > +	rte_free(dev->data->mac_addrs);
> > > > +	dev->data->mac_addrs = NULL;
> > > > +
> > > > +	ice_release_vsi(pf->main_vsi);
> > > > +	ice_sched_cleanup_all(hw);
> > > > +	rte_free(hw->port_info);
> > > > +	ice_shutdown_all_ctrlq(hw);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > >
> > > <snipped>
> > >
> > > > +static void
> > > > +ice_dev_close(struct rte_eth_dev *dev) {
> > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > >dev_private);
> > > > +
> > > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > > +		return;
> > > > +
> > >
> > > Same as previous comment, if port is started in secondary it will
> > > not be seen in primary. Hence is it right to return 0 without checking?
> > >
> > > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > > +	ice_release_vsi(pf->main_vsi);
> > > > +
> > > > +	ice_shutdown_all_ctrlq(hw);
> > > > +}
> > >
> > > <snipped>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
  2018-11-26  5:19         ` Li, Xiaoyun
@ 2018-11-26  5:22           ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-11-26  5:22 UTC (permalink / raw)
  To: Li, Xiaoyun, Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Wu, Jingjing

Thank you, I will wait for the updated documentation and release notes.

> -----Original Message-----
> From: Li, Xiaoyun
> Sent: Monday, November 26, 2018 10:49 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device initialization
> 
> Yes.
> 
> > -----Original Message-----
> > From: Varghese, Vipin
> > Sent: Monday, November 26, 2018 13:14
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device
> > initialization
> >
> > Thanks for the update, so we can assume all functions are available in
> secondary.
> > If not supported it will return right error code and documentation is
> > updated for the limitation too.
> >
> > > -----Original Message-----
> > > From: Li, Xiaoyun
> > > Sent: Monday, November 26, 2018 10:40 AM
> > > To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> > > <wenzhuo.lu@intel.com>; dev@dpdk.org
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > > <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH 02/19] net/ice: support device
> > > initialization
> > >
> > > Hi
> > >
> > > > Do we check if process is secondary and ICE PMD is already is
> > > > initialized? If we do not check will we run to multi process reinitilization?
> > >
> > > Yes. We check. It is in [PATCH 16/19] net/ice: support basic RX/TX.
> > > Please see that.
> > > We roughly split the ice codes in different patch. So the file in
> > > only one patch is not complete.
> > >
> > > >
> > > > > +	ret = ice_init_hw(hw);
> > > > > +	if (ret) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > > > +		return -EINVAL;
> > > > > +	}
> > > > > +
> > > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > > +
> > > > > +	ice_pf_sw_init(dev);
> > > > > +	ret = ice_init_mac_address(dev);
> > > > > +	if (ret) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
> > > > > +		goto err_init_mac;
> > > > > +	}
> > > >
> > > > Assuming in secondary multi process this will be skipped if
> > > > primary has already initialized. Is this understanding correct?
> > > >
> > > > > +
> > > > > +	ret = ice_res_pool_init(&pf->msix_pool, 1,
> > > > > +				hw-
> > > > > >func_caps.common_cap.num_msix_vectors - 1);
> > > > > +	if (ret) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
> > > > > +		goto err_msix_pool_init;
> > > > > +	}
> > > > > +
> > > > > +	ret = ice_pf_setup(pf);
> > > > > +	if (ret) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to setup PF");
> > > > > +		goto err_pf_setup;
> > > > > +	}
> > > >
> > > > Pool init and pf setup also for secondary skip if primary is done?
> > > >
> > > > > +
> > > > > +	return 0;
> > > > > +
> > > > > +err_pf_setup:
> > > > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > > > +err_msix_pool_init:
> > > > > +	rte_free(dev->data->mac_addrs);
> > > > > +err_init_mac:
> > > > > +	ice_sched_cleanup_all(hw);
> > > > > +	rte_free(hw->port_info);
> > > > > +	ice_shutdown_all_ctrlq(hw);
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +ice_release_vsi(struct ice_vsi *vsi) {
> > > > > +	struct ice_hw *hw;
> > > > > +	struct ice_vsi_ctx vsi_ctx;
> > > > > +	enum ice_status ret;
> > > > > +
> > > > > +	if (!vsi)
> > > > > +		return 0;
> > > >
> > > > Should we check if process is secondary and primary sees the port,
> > > > then skip the destroy?
> > > >
> > > > > +
> > > > > +	hw = ICE_VSI_TO_HW(vsi);
> > > > > +
> > > > > +	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
> > > > > +
> > > > > +	vsi_ctx.vsi_num = vsi->vsi_id;
> > > > > +	vsi_ctx.info = vsi->info;
> > > > > +	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
> > > > > +	if (ret != ICE_SUCCESS) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi-
> >vsi_id);
> > > > > +		rte_free(vsi);
> > > > > +		return -1;
> > > > > +	}
> > > > > +
> > > > > +	rte_free(vsi);
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > > >dev_private);
> > > > > +	struct ice_pf *pf =
> > > > > +ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > > +
> > > > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > > > +		return 0;
> > > > > +
> > > >
> > > > Here we have check for secondary, but if the port is added in
> > > > secondary and not primary is it valid to return 0?
> > > >
> > > > > +	ice_dev_close(dev);
> > > > > +
> > > > > +	dev->dev_ops = NULL;
> > > > > +	dev->rx_pkt_burst = NULL;
> > > > > +	dev->tx_pkt_burst = NULL;
> > > > > +
> > > > > +	rte_free(dev->data->mac_addrs);
> > > > > +	dev->data->mac_addrs = NULL;
> > > > > +
> > > > > +	ice_release_vsi(pf->main_vsi);
> > > > > +	ice_sched_cleanup_all(hw);
> > > > > +	rte_free(hw->port_info);
> > > > > +	ice_shutdown_all_ctrlq(hw);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > >
> > > > <snipped>
> > > >
> > > > > +static void
> > > > > +ice_dev_close(struct rte_eth_dev *dev) {
> > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > > >dev_private);
> > > > > +
> > > > > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > > > > +		return;
> > > > > +
> > > >
> > > > Same as previous comment, if port is started in secondary it will
> > > > not be seen in primary. Hence is it right to return 0 without checking?
> > > >
> > > > > +	ice_res_pool_destroy(&pf->msix_pool);
> > > > > +	ice_release_vsi(pf->main_vsi);
> > > > > +
> > > > > +	ice_shutdown_all_ctrlq(hw);
> > > > > +}
> > > >
> > > > <snipped>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 00/20] A new net PMD - ice
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (19 preceding siblings ...)
  2018-11-23 11:00 ` [dpdk-dev] [PATCH 00/19] A new net PMD - ice Thomas Monjalon
@ 2018-12-03  7:06 ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 01/20] net/ice: add base code Wenzhuo Lu
                     ` (19 more replies)
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                   ` (3 subsequent siblings)
  24 siblings, 20 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds support for a new net PMD for the
Intel® Ethernet Network Adapters E810, also
called ice.

Besides enabling this new NIC, this patch set also adds
some other features supported on this NIC, as listed below.

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics & xstats.

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.
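
With the meson support added in this revision, the driver builds with the standard DPDK meson flow (generic commands, not taken from this series):

    meson build
    ninja -C build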

Wenzhuo Lu (20):
  net/ice: add base code
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops
  doc: add ICE description and update release note
  net/ice: support meson build

 MAINTAINERS                              |    7 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   39 +
 doc/guides/nics/ice.rst                  |   87 +
 doc/guides/rel_notes/release_19_02.rst   |    4 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   76 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1724 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_bitops.h        |  233 +
 drivers/net/ice/base/ice_common.c        | 3331 ++++++++++
 drivers/net/ice/base/ice_common.h        |  159 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_impl_guide.c    |  167 +
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2290 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  491 ++
 drivers/net/ice/base/ice_protocol_type.h |  237 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 1713 ++++++
 drivers/net/ice/base/ice_sched.h         |   68 +
 drivers/net/ice/base/ice_sriov.c         |  129 +
 drivers/net/ice/base/ice_sriov.h         |   35 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2415 ++++++++
 drivers/net/ice/base/ice_switch.h        |  320 +
 drivers/net/ice/base/ice_type.h          |  789 +++
 drivers/net/ice/base/meson.build         |   30 +
 drivers/net/ice/base/virtchnl.h          |  787 +++
 drivers/net/ice/ice_ethdev.c             | 3320 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  348 ++
 drivers/net/ice/ice_lan_rxtx.c           | 2914 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  155 +
 drivers/net/ice/meson.build              |   15 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 44 files changed, 33567 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_bitops.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_impl_guide.c
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_sriov.c
 create mode 100644 drivers/net/ice/base/ice_sriov.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/base/virtchnl.h
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3


* [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  4:18     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization Wenzhuo Lu
                     ` (18 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add the ice base code, currently at version 2018.10.30.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                              |    6 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1724 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_bitops.h        |  233 +
 drivers/net/ice/base/ice_common.c        | 3331 ++++++++++
 drivers/net/ice/base/ice_common.h        |  159 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_impl_guide.c    |  167 +
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2290 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  491 ++
 drivers/net/ice/base/ice_protocol_type.h |  237 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 1713 ++++++
 drivers/net/ice/base/ice_sched.h         |   68 +
 drivers/net/ice/base/ice_sriov.c         |  129 +
 drivers/net/ice/base/ice_sriov.h         |   35 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2415 ++++++++
 drivers/net/ice/base/ice_switch.h        |  320 +
 drivers/net/ice/base/ice_type.h          |  789 +++
 drivers/net/ice/base/virtchnl.h          |  787 +++
 28 files changed, 26517 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_bitops.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_impl_guide.c
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_sriov.c
 create mode 100644 drivers/net/ice/base/ice_sriov.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba312..37f3bf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..d8c7a9b
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.10.30, as released by the team that develops the
+base drivers for all ice NICs. The base/ directory contains the
+original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..e711502
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1724 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and the
+	 * driver is expected to release the resource before the timeout
+	 * expires. This value is provided in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
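
As an aside on usage: below is a minimal sketch (not part of the patch)
of acquiring the NVM resource with this command. The helpers
ice_fill_dflt_direct_cmd_desc() and ice_aq_send_cmd() are assumed here;
the series wraps this logic in ice_common.c.

	static enum ice_status ice_req_nvm_res_sketch(struct ice_hw *hw)
	{
		struct ice_aq_desc desc;
		struct ice_aqc_req_res *cmd = &desc.params.res_owner;

		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
		cmd->res_id = CPU_TO_LE16(ICE_AQC_RES_ID_NVM);
		cmd->access_type = CPU_TO_LE16(ICE_AQC_RES_ACCESS_READ);
		/* On success, FW returns the hold time in cmd->timeout (ms);
		 * ice_aqc_opc_release_res must be sent before it expires.
		 */
		return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
	}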
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_SRIOV				0x0012
+#define ICE_AQC_CAPS_VF					0x0013
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
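
Since sah/sal are big-endian while the rest of the descriptor is
little-endian, the 6-byte MAC address must be split explicitly. A
sketch, assuming 'mac' is a u8[6] in network order and HTONS()/HTONL()
byte-swap helpers are available:

	struct ice_aqc_manage_mac_write *cmd = &desc.params.mac_write;

	/* mac[0..1] -> sah (high 16 bits), mac[2..5] -> sal (low 32 bits) */
	cmd->sah = HTONS(((u16)mac[0] << 8) | mac[1]);
	cmd->sal = HTONL(((u32)mac[2] << 24) | ((u32)mac[3] << 16) |
			 ((u32)mac[4] << 8) | mac[5]);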
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In a response, a non-zero value means that not all of the
+	 * configuration was returned and that a new command shall be sent
+	 * with this value in the 'first desc' field.
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
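
The 'element' field acts as a continuation token, so retrieving the
full switch configuration is a loop. A sketch, assuming an
ice_aq_get_sw_cfg() wrapper that fills 'buf' and returns the
descriptor's element value through 'req_desc':

	u16 req_desc = 0;
	u16 num_elems;
	enum ice_status status;

	do {
		status = ice_aq_get_sw_cfg(hw, buf, buf_len, &req_desc,
					   &num_elems, NULL);
		if (status)
			break;
		/* consume num_elems entries of buf->elements[] here */
	} while (req_desc);	/* non-zero: more configuration to fetch */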
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
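
Each tc_mapping word packs a queue offset and a queue count for one TC.
A sketch of composing one entry, where 'ctxt->info' is assumed to be a
struct ice_aqc_vsi_props; the assumption (borrowed from the Linux ice
driver) is that the count field carries a power-of-two exponent rather
than a raw queue number:

	u16 offset = 0;		/* first queue of this TC */
	u16 qcount_pow = 3;	/* 2^3 = 8 queues, assumed encoding */

	ctxt->info.tc_mapping[0] =
		CPU_TO_LE16(((offset << ICE_AQ_VSI_TC_Q_OFFSET_S) &
			     ICE_AQ_VSI_TC_Q_OFFSET_M) |
			    ((qcount_pow << ICE_AQ_VSI_TC_Q_NUM_S) &
			     ICE_AQ_VSI_TC_Q_NUM_M));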
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, referring to the number of rules.
+	 * ops: update switch rules, referring to the number of filters.
+	 * ops: remove switch rules, referring to the entry index.
+	 * ops: get switch rules, referring to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. Last two bits
+	 * are other action identifier
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
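
The 'act' word composes an action type with its parameters. A sketch of
a "forward to VSI" action built purely from the defines above, for a
hypothetical rule pointer 'rule' of type struct ice_sw_rule_lkup_rx_tx:

	u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
		  ((vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
		   ICE_SINGLE_ACT_VSI_ID_M) |
		  ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;

	rule->act = CPU_TO_LE32(act);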
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:1 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} __packed pdata;
+};
+
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is #define of PHY type (Extended):
+ * The first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 reserved;
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 rsvd0;
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 reserved4;
+};
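
Since link_speed is a bitmask with one bit expected to be set, reporting
code has to translate it. A sketch of such a helper (the Mb/s mapping
follows directly from the speed names above):

	static u32 ice_aq_speed_to_mbps(u16 aq_speed)
	{
		switch (aq_speed) {
		case ICE_AQ_LINK_SPEED_10MB:	return 10;
		case ICE_AQ_LINK_SPEED_100MB:	return 100;
		case ICE_AQ_LINK_SPEED_1000MB:	return 1000;
		case ICE_AQ_LINK_SPEED_2500MB:	return 2500;
		case ICE_AQ_LINK_SPEED_5GB:	return 5000;
		case ICE_AQ_LINK_SPEED_10GB:	return 10000;
		case ICE_AQ_LINK_SPEED_20GB:	return 20000;
		case ICE_AQ_LINK_SPEED_25GB:	return 25000;
		case ICE_AQ_LINK_SPEED_40GB:	return 40000;
		default:			return 0; /* unknown */
		}
	}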
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
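
The NVM byte offset is 24 bits wide, split across offset_low and
offset_high. A sketch of programming it, for a hypothetical command
pointer 'cmd' of type struct ice_aqc_nvm:

	u32 offset = 0x12345;	/* arbitrary 24-bit NVM byte offset */

	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
	cmd->offset_high = (offset >> 16) & 0xFF;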
+
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+/* Send to PF command (indirect 0x0801); 'id' is only used by PF.
+ * Send to VF command (indirect 0x0802); 'id' is only used by PF.
+ */
+struct ice_aqc_pf_vf_msg {
+	__le32 id;
+	u32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
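
The flags word selects table type, table size and (for global tables)
the LUT index. A sketch of requesting a 512-entry PF table, composed
only from the defines above, for a hypothetical command pointer 'cmd':

	u16 flags = ((ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
		      ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
		     ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M) |
		    ((ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
		      ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
		     ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M);

	cmd->flags = CPU_TO_LE16(flags);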
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Please note that the length of
+ * each struct ice_aqc_add_tx_qgrp is variable due
+ * to the variable number of queues in each group!
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
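
Because txqs[] is declared with a single element but holds num_txqs
entries, the buffer size for one group must be computed rather than
taken from sizeof() alone. A sketch:

	u16 grp_size = sizeof(struct ice_aqc_add_tx_qgrp) +
		       (num_txqs - 1) * sizeof(struct ice_aqc_add_txqs_perq);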
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
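
The alignment note above makes a group's size depend on the parity of
num_qs. A sketch of sizing one ice_aqc_dis_txq_item, padding q_id[] so
the next group's parent_teid stays 4-byte aligned:

	u16 item_size = sizeof(struct ice_aqc_dis_txq_item) +
			(num_qs - 1) * sizeof(__le16);

	if (!(num_qs % 2))	/* even count: pad with one unused q_id */
		item_size += sizeof(__le16);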
+
+
+
+
+
+
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
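
The number of entries follows from the descriptor's datalen, and each
16-bit entry packs a module ID with its enable flags. A sketch of
walking a response, assuming 'desc' is the completed descriptor and
'data' points at the returned ice_aqc_fw_logging_data buffer:

	u16 i, nentries = LE16_TO_CPU(desc.datalen) / sizeof(__le16);

	for (i = 0; i < nentries; i++) {
		u16 v = LE16_TO_CPU(data->entry[i]);
		u16 module = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
		u16 evts = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;

		/* 'module' indexes enum ice_aqc_fw_logging_mod */
	}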
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_h: opaque data high-half
+ * @cookie_l: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_pf_vf_msg virt;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
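A minimal usage sketch of how these flags combine for an indirect command
(illustrative only, not part of the patch; the helper name is hypothetical,
and CPU_TO_LE16 comes from this series' osdep layer): BUF marks an attached
buffer, RD tells firmware to read that buffer rather than write into it, and
LB is additionally required once the buffer exceeds ICE_AQ_LG_BUF.

	static void example_fill_indirect_desc(struct ice_aq_desc *desc,
					       u16 opcode, u16 buf_len)
	{
		memset(desc, 0, sizeof(*desc));	/* or the osdep ice_memset() */
		desc->opcode = CPU_TO_LE16(opcode);
		desc->datalen = CPU_TO_LE16(buf_len);
		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF | ICE_AQ_FLAG_RD);
		/* buffers larger than ICE_AQ_LG_BUF must also set LB */
		if (buf_len > ICE_AQ_LG_BUF)
			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
	}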
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+	/* PF/VF mailbox commands */
+	ice_mbx_opc_send_msg_to_pf			= 0x0801,
+	ice_mbx_opc_send_msg_to_vf			= 0x0802,
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
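A usage sketch (illustrative only): these direction hints are consumed by the
osdep ice_memcpy()/ice_memset() wrappers defined elsewhere in this series, so
each environment can pick a suitable copy primitive. For example, pulling a
MAC address out of a DMA-able admin queue response buffer into host memory:

	ice_memcpy(hw->port_info->mac.lan_addr, resp->mac_addr,
		   ETH_ALEN, ICE_DMA_TO_NONDMA);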
diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
new file mode 100644
index 0000000..ac6a51b
--- /dev/null
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_BITOPS_H_
+#define _ICE_BITOPS_H_
+
+/* Define the size of the bitmap chunk */
+typedef u32 ice_bitmap_t;
+
+
+/* Number of bits per bitmap chunk */
+#define BITS_PER_CHUNK		(BITS_PER_BYTE * sizeof(ice_bitmap_t))
+/* Determine which chunk a bit belongs in */
+#define BIT_CHUNK(nr)		((nr) / BITS_PER_CHUNK)
+/* How many chunks are required to store this many bits */
+#define BITS_TO_CHUNKS(sz)	DIVIDE_AND_ROUND_UP((sz), BITS_PER_CHUNK)
+/* Which bit inside a chunk this bit corresponds to */
+#define BIT_IN_CHUNK(nr)	BIT((nr) % BITS_PER_CHUNK)
+/* How many bits are valid in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_BITS(nr)	((((nr) - 1) % BITS_PER_CHUNK) + 1)
+/* Generate a bitmask of valid bits in the last chunk, assumes nr > 0 */
+#define LAST_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >> \
+				 (BITS_PER_CHUNK - LAST_CHUNK_BITS(nr)))
+
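A worked example, assuming 32-bit chunks: for a 70-bit bitmap,
LAST_CHUNK_BITS(70) = ((70 - 1) % 32) + 1 = 6, so LAST_CHUNK_MASK(70) =
0xFFFFFFFF >> (32 - 6) = 0x3F, i.e. only the low six bits of the third and
last chunk are treated as valid.
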
+#define ice_declare_bitmap(A, sz) \
+	ice_bitmap_t A[BITS_TO_CHUNKS(sz)]
+
+/**
+ * ice_is_bit_set - Check state of a bit in a bitmap
+ * @bitmap: the bitmap to check
+ * @nr: the bit to check
+ *
+ * Returns true if bit nr of bitmap is set. False otherwise. Assumes that nr
+ * is less than the size of the bitmap.
+ */
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+/**
+ * ice_clear_bit - Clear a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Clears the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_clear_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+	bitmap[BIT_CHUNK(nr)] &= ~BIT_IN_CHUNK(nr);
+}
+
+/**
+ * ice_set_bit - Set a bit in a bitmap
+ * @bitmap: the bitmap to change
+ * @nr: the bit to change
+ *
+ * Sets the bit nr in bitmap. Assumes that nr is less than the size of the
+ * bitmap.
+ */
+static inline void ice_set_bit(u16 nr, ice_bitmap_t *bitmap)
+{
+	bitmap[BIT_CHUNK(nr)] |= BIT_IN_CHUNK(nr);
+}
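A usage sketch of the accessors above, together with ice_zero_bitmap()
defined just below (illustrative only; ICE_MAX_TRAFFIC_CLASS stands in for
any bit count known at compile time):

	ice_declare_bitmap(tc_bmp, ICE_MAX_TRAFFIC_CLASS);

	ice_zero_bitmap(tc_bmp, ICE_MAX_TRAFFIC_CLASS);
	ice_set_bit(3, tc_bmp);
	if (ice_is_bit_set(tc_bmp, 3))
		ice_clear_bit(3, tc_bmp);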
+
+/**
+ * ice_zero_bitmap - set all bits of a bitmap to zero
+ * @bmp: the bitmap to zero
+ * @size: size of the bitmap in bits
+ *
+ * This function sets all bits of the bitmap to zero, taking care not to
+ * modify bits beyond the size boundary in the last chunk.
+ */
+static inline void ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	/* For the last chunk, we want to take care not to modify bits
+	 * outside the size boundary. ~mask takes care of all the bits
+	 * outside the boundary.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+/**
+ * ice_and_bitmap - bitwise AND 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to intersect
+ * @bmp2: The second bitmap to intersect with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise AND on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows. This function returns
+ * a non-zero value if at least one bit location from both "source" bitmaps is
+ * non-zero.
+ */
+static inline int
+ice_and_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	       const ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t res = 0, mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++) {
+		dst[i] = bmp1[i] & bmp2[i];
+		res |= dst[i];
+	}
+
+	/* We want to take care not to modify any bits outside of the bitmap
+	 * size, even in the destination bitmap. Thus, we won't directly
+	 * assign the last bitmap, but instead use a bitmask to ensure we only
+	 * modify bits which are within the size, and leave any bits above the
+	 * size value alone.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] & bmp2[i]) & mask;
+	res |= dst[i] & mask;
+
+	return res != 0;
+}
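A usage sketch (bitmap names and the ICE_CAP_BITS size are hypothetical):
intersecting two equally-sized capability bitmaps; destination bits at
positions >= size are left untouched, and the return value reports whether
the intersection is non-empty:

	if (ice_and_bitmap(common, caps_a, caps_b, ICE_CAP_BITS))
		shared = true;	/* at least one capability bit matched */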
+
+/**
+ * ice_or_bitmap - bitwise OR 2 bitmaps and store result in dst bitmap
+ * @dst: Destination bitmap that receives the result of the operation
+ * @bmp1: The first bitmap to OR
+ * @bmp2: The second bitmap to OR with the first
+ * @size: Size of the bitmaps in bits
+ *
+ * This function performs a bitwise OR on two "source" bitmaps of the same size
+ * and stores the result to "dst" bitmap. The "dst" bitmap must be of the same
+ * size as the "source" bitmaps to avoid buffer overflows.
+ */
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = LAST_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+/**
+ * ice_find_next_bit - Find the index of the next set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ * @offset: the offset to start at
+ *
+ * Scans the bitmap and returns the index of the first set bit which is equal
+ * to or after the specified offset. Will return size if no bits are set.
+ */
+static inline u16
+ice_find_next_bit(const ice_bitmap_t *bitmap, u16 size, u16 offset)
+{
+	u16 i, j;
+
+	if (offset >= size)
+		return size;
+
+	/* Since the starting position may not be directly on a chunk
+	 * boundary, we need to be careful to handle the first chunk specially
+	 */
+	i = BIT_CHUNK(offset);
+	if (bitmap[i] != 0) {
+		u16 off = i * BITS_PER_CHUNK;
+
+		for (j = offset % BITS_PER_CHUNK; j < BITS_PER_CHUNK; j++) {
+			if (ice_is_bit_set(bitmap, off + j))
+				return min(size, (u16)(off + j));
+		}
+	}
+
+	/* Now we handle the remaining chunks, if any */
+	for (i++; i < BITS_TO_CHUNKS(size); i++) {
+		if (bitmap[i] != 0) {
+			u16 off = i * BITS_PER_CHUNK;
+
+			for (j = 0; j < BITS_PER_CHUNK; j++) {
+				if (ice_is_bit_set(bitmap, off + j))
+					return min(size, (u16)(off + j));
+			}
+		}
+	}
+	return size;
+}
+
+/**
+ * ice_find_first_bit - Find the index of the first set bit of a bitmap
+ * @bitmap: the bitmap to scan
+ * @size: the size in bits of the bitmap
+ *
+ * Scans the bitmap and returns the index of the first set bit. Will return
+ * size if no bits are set.
+ */
+static inline u16 ice_find_first_bit(const ice_bitmap_t *bitmap, u16 size)
+{
+	return ice_find_next_bit(bitmap, size, 0);
+}
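A usage sketch: the two scan helpers compose into the usual "for each set
bit" loop (process_bit() is a hypothetical callback):

	for (i = ice_find_first_bit(bmp, size); i < size;
	     i = ice_find_next_bit(bmp, size, i + 1))
		process_bit(i);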
+
+/**
+ * ice_is_any_bit_set - Return true if any bit in the bitmap is set
+ * @bitmap: the bitmap to check
+ * @size: the size of the bitmap
+ *
+ * Equivalent to checking if ice_find_first_bit returns a value less than the
+ * bitmap size.
+ */
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size)
+{
+	return ice_find_first_bit(bitmap, size) < size;
+}
+
+
+#endif /* _ICE_BITOPS_H_ */
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..a012749
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3331 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+
+	/* configure Rx - set non pxe mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+
+
+#define MBX_PF_VT_PFALLOC	0x00231E80 /* Reset Source: CORER */
+	/* set VFs per PF */
+	wr32(hw, MBX_PF_VT_PFALLOC, rd32(hw, PF_VT_PFALLOC_HIF));
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return per PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user-specified buffer, which should be interpreted as a
+ * "manage_mac_read" response. Responses such as the various MAC addresses are
+ * stored in the HW struct (port.mac). ice_aq_discover_caps is expected to be
+ * called before this function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP)
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x0607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Disable FW logging only when the control queue is still responsive */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but all modules are not
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
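A usage sketch of the contract described in the comment above ("module" and
"events" are hypothetical placeholders for a FW log module index and its
event bits): the caller selects a transport in hw->fw_log, fills the
per-module "cfg" fields, then invokes the function around device init:

	hw->fw_log.cq_en = true;		/* deliver FW logs via Rx CQ */
	hw->fw_log.evnts[module].cfg = events;	/* desired event types */
	if (ice_cfg_fw_log(hw, true))
		ice_debug(hw, ICE_DBG_INIT, "FW logging not enabled\n");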
+
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing, since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
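A usage sketch of the intended pairing (error handling elided): ice_init_hw()
unrolls its own partial initialization on failure, so ice_deinit_hw() is only
called for a device that initialized successfully:

	status = ice_init_hw(hw);
	if (status)
		return status;	/* nothing left to unroll here */
	/* ... nominal operation ... */
	ice_deinit_hw(hw);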
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
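A worked example of the polling budget above: if the GRSTDEL field of
GLGEN_RSTCTL reads 5, then grst_delay = 5 + 10 = 15 iterations of 100 ms,
i.e. 0.5 s of firmware-specified reset delay plus the extra 1 s allowance
for outstanding AQ commands.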
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
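A usage sketch (field values illustrative, assuming the hardware's 128-byte
granularity for ring base addresses and buffer sizes): the caller fills only
the sparse fields it cares about and lets the helper pack and program the
dense image:

	struct ice_rlan_ctx rlan_ctx = { 0 };

	rlan_ctx.base = ring_phys_addr >> 7;	/* 128-byte units */
	rlan_ctx.qlen = nb_rx_desc;
	rlan_ctx.dbuf = rx_buf_size >> 7;	/* 128-byte units */
	status = ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
	if (status)
		return status;	/* bad queue index or pointer */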
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps debug log about control command with descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 __maybe_unused mask, void *desc, void *buf,
+	     u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
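+
+/* Usage sketch (illustrative only): on success the version fields are cached
+ * in the hw struct and can be read directly, e.g.:
+ *
+ *	if (!ice_aq_get_fw_ver(hw, NULL))
+ *		ice_debug(hw, ICE_DBG_INIT, "FW %d.%d.%d API %d.%d\n",
+ *			  hw->fw_maj_ver, hw->fw_min_ver, hw->fw_patch,
+ *			  hw->api_maj_ver, hw->api_min_ver);
+ */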
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire-lock, perform-action, release-lock
+ * phase of operation, it is possible that the FW may detect a timeout and issue
+ * a CORER. In this case, the driver will receive a CORER interrupt and will
+ * have to determine its cause. The calling thread that is handling this flow
+ * will likely get an error propagated back to it indicating the Download
+ * Package, Update Package or the Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion specifies the maximum time in ms that the driver
+	 * may hold the resource in the Timeout field.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * Release a common resource using the admin queue command (0x0009).
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* In some rare cases, trying to release the resource results in an
+	 * admin queue timeout; retry until the release succeeds or the SQ
+	 * command timeout elapses.
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
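+
+/* Typical acquire/use/release pattern (illustrative sketch; the 3000 ms
+ * timeout is an example value):
+ *
+ *	enum ice_status status;
+ *
+ *	status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID,
+ *				 ICE_RES_WRITE, 3000);
+ *	if (status == ICE_ERR_AQ_NO_WORK)
+ *		return ICE_SUCCESS;	(another driver already did the work)
+ *	if (status)
+ *		return status;		(lock not obtained)
+ *	... perform the protected operation, e.g. a package download ...
+ *	ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
+ */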
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_guar_num_vsi - determine the number of guaranteed VSIs per PF
+ * @hw: pointer to the hw structure
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities, and use it to calculate the number of VSIs per PF.
+ */
+static u32 ice_get_guar_num_vsi(struct ice_hw *hw)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return ICE_MAX_VSI / funcs;
+}
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse the function (0x000A) or device (0x000B)
+ * capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_SRIOV:
+			caps->sr_iov_1_1 = (number == 1);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: SR-IOV = %d\n", caps->sr_iov_1_1);
+			break;
+		case ICE_AQC_CAPS_VF:
+			if (dev_p) {
+				dev_p->num_vfs_exposed = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VFs exposed = %d\n",
+					  dev_p->num_vfs_exposed);
+			} else if (func_p) {
+				func_p->num_allocd_vfs = number;
+				func_p->vf_base_id = logical_id;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VFs allocated = %d\n",
+					  func_p->num_allocd_vfs);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: VF base_id = %d\n",
+					  func_p->vf_base_id);
+			}
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_guar_num_vsi(hw);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: set to the capability count the firmware reports when the AQ
+ *             returns ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+	/* Prep values for flags, sah, sal */
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
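+
+/* Packing sketch (illustrative): for mac_addr 00:11:22:33:44:55 the prep
+ * above yields, in network byte order,
+ *
+ *	cmd->sah = 0x0011	(first 2 MAC bytes)
+ *	cmd->sal = 0x22334455	(last 4 MAC bytes)
+ *
+ * i.e. the six address bytes are laid out in wire order across sah and sal.
+ */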
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ *
+ * This helper function will convert a phy_type_low to its corresponding link
+ * speed.
+ * Note: phy_type_low must have exactly one bit set, as this function converts
+ * a single PHY type to its speed.
+ * If no bit is set, or more than one bit is set, ICE_AQ_LINK_SPEED_UNKNOWN
+ * will be returned.
+ */
+static u16 ice_get_link_speed_based_on_phy_type(u64 phy_type_low)
+{
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	return speed_phy_type_low;
+}
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: link_speeds_bitmap uses the same encoding as the link_speed field of
+ * ice_aqc_get_link_status. The caller may pass a bitmap containing multiple
+ * speeds.
+ *
+ * Each bit of phy_type_low represents a link speed. This helper function
+ * turns on bits in phy_type_low whose speeds are included in the
+ * link_speeds_bitmap input parameter.
+ */
+void ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_low;
+	int index;
+
+	/* We first check with low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+}
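+
+/* Usage sketch (illustrative):
+ *
+ *	u64 phy_type_low = 0;
+ *
+ *	ice_update_phy_type(&phy_type_low,
+ *			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+ *
+ * phy_type_low now has every PHY type bit whose speed is 10G or 25G set,
+ * ready to be written into a set PHY config request.
+ */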
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the logical port of interest
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info. It sometimes takes a really long time
+		 * for the link to come back from the atomic reset, so wait a
+		 * little between retries.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Sets link_up to true if the link is up and false if it is down; link_up is
+ * invalid if the returned status is non-zero. As a side effect of this call,
+ * link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
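+
+/* Usage sketch (illustrative):
+ *
+ *	bool link_up;
+ *
+ *	if (!ice_get_link_status(pi, &link_up))
+ *		ice_debug(pi->hw, ICE_DBG_LINK, "link is %s\n",
+ *			  link_up ? "up" : "down");
+ */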
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) the RSS lookup table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+				    struct ice_aqc_get_set_rss_keys *key,
+				    bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
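+
+/* Usage sketch (illustrative; assumes a valid software VSI handle and a
+ * caller-filled key structure):
+ *
+ *	struct ice_aqc_get_set_rss_keys keys = { ... };
+ *	enum ice_status status;
+ *
+ *	status = ice_aq_set_rss_key(hw, vsi_handle, &keys);
+ *
+ * The wrappers validate the software handle and translate it to the firmware
+ * VSI number via ice_get_hw_vsi_num() before issuing the command.
+ */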
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of the
+ * Tx queue context: the Completion queue ID (if the queue uses a Completion
+ * queue), the Quanta profile, the Cache profile and the Packet shaper
+ * profile.
+ *
+ * After the add Tx LAN queue AQ command completes, interrupts should be
+ * associated with the specific queues; association of a Tx queue with a
+ * Doorbell queue is not part of the add Tx LAN queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
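+
+/* Buffer-size check, worked example (illustrative; assumes, as the header
+ * computation above implies, that struct ice_aqc_add_tx_qgrp declares txqs
+ * as a one-element array): for num_qgrps = 1 with num_txqs = 1,
+ *
+ *	sum_header_size = sizeof(struct ice_aqc_add_tx_qgrp)
+ *			  - sizeof(struct ice_aqc_add_txqs_perq)
+ *	sum_q_size	= sizeof(struct ice_aqc_add_txqs_perq)
+ *
+ * so buf_size must equal sizeof(struct ice_aqc_add_tx_qgrp); any other value
+ * returns ICE_ERR_PARAM.
+ */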
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative VM or VF number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_VF_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VF_RESET;
+		/* In this case, FW expects vmvf_num to be absolute VF id */
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16((vmvf_num + hw->func_caps.vf_base_id) &
+				    ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
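+
+/* Size check, worked example (illustrative): for a single group with
+ * num_qs = 2, the loop above computes
+ *
+ *	sz = 2 * sizeof(q_id)			(the queue IDs)
+ *	   + sizeof(group) - sizeof(q_id)	(the group header)
+ *	   + 2					(padding, since num_qs is even)
+ *
+ * and buf_size must match this sum exactly or the call returns
+ * ICE_ERR_PARAM.
+ */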
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, the shift
+	 * operation will not work because the SHL instruction's shift count is
+	 * masked to 5 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, the shift
+	 * operation will not work because the SHL instruction's shift count is
+	 * masked to 6 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
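+
+/* Sketch of how a descriptor table drives ice_set_ctx (illustrative; the
+ * struct and table below are hypothetical, with field initializers matching
+ * the ce_info members used by the write helpers above):
+ *
+ *	struct my_ctx { u16 head; u8 ena; } ctx = { .head = 5, .ena = 1 };
+ *
+ *	const struct ice_ctx_ele my_ctx_info[] = {
+ *		{ .offset = offsetof(struct my_ctx, head),
+ *		  .size_of = sizeof(u16), .lsb = 0, .width = 13 },
+ *		{ .offset = offsetof(struct my_ctx, ena),
+ *		  .size_of = sizeof(u8), .lsb = 13, .width = 1 },
+ *		{ 0 },	(width == 0 terminates the ice_set_ctx loop)
+ *	};
+ *
+ *	ice_set_ctx((u8 *)&ctx, dest_buf, my_ctx_info);
+ *
+ * Each element is dispatched by size_of to ice_write_byte/word/dword/qword,
+ * which masks the source value to width bits and shifts it to lsb within the
+ * packed destination buffer.
+ */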
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bits 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 *   Bits 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS)
+		goto ena_txq_exit;
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative VM or VF number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* if the queue is already disabled but the disable queue command still
+	 * has to be sent to complete the VF reset, call ice_aq_dis_lan_txq
+	 * without any queue information
+	 */
+
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: LAN or RDMA
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI LAN queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max LAN queues array per TC
+ *
+ * This function adds/updates the VSI LAN queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from replay filter list head if there is any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_replay_vsi - replay VSI configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver VSI handle
+ *
+ * Restore all VSI configuration after reset. This function must be called
+ * with the main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+}
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* Device stats are not reset at PFR, so they likely will not be zeroed
+	 * when the driver starts. Save the first values read and use them as
+	 * offsets to be subtracted from the raw values, in order to report
+	 * stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
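+
+/* Roll-over arithmetic, worked example (illustrative): with the 40-bit
+ * counter, if *prev_stat = 0xFFFFFFFF00 and a later read returns
+ * new_data = 0x10, then
+ *
+ *	*cur_stat = (0x10 + BIT_ULL(40)) - 0xFFFFFFFF00 = 0x110
+ *
+ * i.e. the reported stat is 272 counts above the saved baseline, with the
+ * wrap handled instead of producing a huge unsigned difference.
+ */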
+
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* Device stats are not reset at PFR, so they likely will not be zeroed
+	 * when the driver starts. Save the first values read and use them as
+	 * offsets to be subtracted from the raw values, in order to report
+	 * stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..fc2870c
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "virtchnl.h"
+#include "ice_switch.h"
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+
+
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+#endif /* _ICE_COMMON_H_ */
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..cbc4cb4
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
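+
+/* Illustrative expansion: the ## token pasting means, for example, that
+ * ICE_CQ_INIT_REGS(cq, PF_FW) turns the first assignment into
+ *	(cq)->sq.head = PF_FW_ATQH;
+ * so the one macro covers both the admin queue (PF_FW_*) and the mailbox
+ * (PF_MBX_*) register sets.
+ */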
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This only records the admin queue register offsets; it is called before
+ * the SQ/RQ rings are allocated
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This only records the mailbox queue register offsets; it is called before
+ * the SQ/RQ rings are allocated
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the send queue is enabled, false otherwise.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
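+
+/* Illustrative reading of the check above: the send queue only counts as
+ * alive if the LEN field still holds the configured entry count *and* the
+ * enable bit is set, i.e. exactly what ice_cfg_cq_regs() writes as
+ * (num_entries | ring->len_ena_mask).
+ */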
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design; there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
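+
+/* Illustrative expansion: invoked as ICE_FREE_CQ_BUFS(hw, cq, sq), the ##
+ * pasting resolves num_##ring##_entries to (cq)->num_sq_entries and
+ * ring##_bi to (cq)->sq.r.sq_bi, so the same macro frees either ring.
+ */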
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' if the driver should attempt to load, 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
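+
+/* Illustrative reading of the version check above: with
+ * EXP_FW_API_VER_MAJOR = 1 and EXP_FW_API_VER_MINOR = 3, FW reporting API
+ * 2.x makes the driver refuse to load, 1.6 logs the "newer than expected"
+ * message (6 > 3 + 2), 1.0 logs the "older than expected" message
+ * (0 + 2 < 3), and 1.1 through 1.5 load silently.
+ */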
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it's alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
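+
+/* Minimal caller sketch (illustrative, not part of this patch): the queue
+ * sizing fields must be filled in before ice_init_all_ctrlq() runs, e.g.
+ *
+ *	hw->adminq.num_sq_entries = 32;		// driver-chosen depth
+ *	hw->adminq.num_rq_entries = 32;
+ *	hw->adminq.sq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	hw->adminq.rq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *
+ * and likewise for hw->mailboxq with ICE_MBXQ_MAX_BUF_LEN.
+ */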
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * returns the number of free desc
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+static bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest using the head register rather than the DD
+	 * bit for better timing reliability
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It posts the
+ * descriptor (and any indirect buffer), rings the doorbell, polls for
+ * completion, and reclaims descriptors that firmware has already processed.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Clean the queue first to reclaim descriptors already processed by
+	 * FW/MBX; ice_clean_sq() returns the number of descriptors available.
+	 * With asynchronous completions, the cleaning could instead run in a
+	 * separate thread.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > than buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
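+
+/* Minimal usage sketch (illustrative only): a direct command carries no
+ * buffer, so a caller fills a stack descriptor and sends it with buf = NULL,
+ * assuming an opcode such as ice_aqc_opc_get_ver from ice_adminq_cmd.h:
+ *
+ *	struct ice_aq_desc desc;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+ *	status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ */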
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
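+
+/* Worked example (illustrative only) of the pending calculation above: with
+ * rq.count = 64, ntc = 60 after cleaning and a re-read head of ntu = 2,
+ * ntc > ntu, so *pending = 64 + (2 - 60) = 6 events still wait in the ring.
+ */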
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
+
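+/* Worked example (illustrative only): with count = 64, next_to_clean = 5 and
+ * next_to_use = 10, the macro yields (64 + 5) - 10 - 1 = 58 unused
+ * descriptors; the -1 keeps one slot permanently empty so a full ring can be
+ * told apart from an empty one.
+ */
+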
+/* Defines that help manage the driver vs FW API checks.
+ * See ice_aq_ver_check() in ice_controlq.c for usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address to dma head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
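+/* Note (illustrative): the _S/_M pairs below are shift/mask companions, so a
+ * field is typically extracted as, e.g.,
+ *	(rd32(hw, GL_RDPU_CNTRL) & GL_RDPU_CNTRL_ECO_M) >> GL_RDPU_CNTRL_ECO_S
+ */
+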
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
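
The _S/_M pairs above are the usual shift/mask encoding of register fields. A minimal, self-contained sketch of composing a PF0INT_DYN_CTL value from them follows, assuming MAKEMASK(m, s) expands to ((m) << (s)) as it does in the ice base code; the helpers and field macros are re-declared locally so the example compiles on its own, and the result is only printed since this is not driver code:

#include <stdio.h>
#include <stdint.h>

/* Local re-declarations for illustration; the driver gets these
 * from ice_hw_autogen.h and the osdep helpers. */
#define BIT(n)				(1UL << (n))
#define MAKEMASK(m, s)			((m) << (s))
#define PF0INT_DYN_CTL_INTENA_M		BIT(0)
#define PF0INT_DYN_CTL_CLEARPBA_M	BIT(1)
#define PF0INT_DYN_CTL_ITR_INDX_S	3
#define PF0INT_DYN_CTL_ITR_INDX_M	MAKEMASK(0x3, 3)

int main(void)
{
	/* Enable the interrupt, clear the pending-bit array and
	 * select ITR index 1, masking the index into its field. */
	uint32_t val = PF0INT_DYN_CTL_INTENA_M |
		       PF0INT_DYN_CTL_CLEARPBA_M |
		       ((1U << PF0INT_DYN_CTL_ITR_INDX_S) &
			PF0INT_DYN_CTL_ITR_INDX_M);

	printf("PF0INT_DYN_CTL value: 0x%08X\n", (unsigned int)val);
	return 0;
}

Masking the shifted value with the _M macro, as the ITR index is here, keeps stray upper bits from corrupting neighbouring fields in the same register.
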
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
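
Registers taking a parameter such as (_CQ) above are fixed-stride arrays in BAR0: the macro computes the byte offset of one entry, and the matching _MAX_INDEX define gives the last valid index. A short stand-alone sketch of the offset computation with a bounds check, using one of the context registers above (offsets copied from the defines, the chosen index is arbitrary):

#include <stdio.h>

/* Local copy of one indexed register macro for illustration;
 * the driver's copy lives in ice_hw_autogen.h. */
#define GLTCLAN_CQ_CNTX0(_CQ)		(0x000F0800 + ((_CQ) * 4))
#define GLTCLAN_CQ_CNTX0_MAX_INDEX	511

int main(void)
{
	/* Each completion-queue context register is a 4-byte-strided
	 * array entry; bounds-check the index before computing the
	 * BAR offset. */
	unsigned int cq = 3;

	if (cq > GLTCLAN_CQ_CNTX0_MAX_INDEX) {
		fprintf(stderr, "CQ index %u out of range\n", cq);
		return 1;
	}
	printf("GLTCLAN_CQ_CNTX0[%u] offset: 0x%08X\n",
	       cq, (unsigned int)GLTCLAN_CQ_CNTX0(cq));
	return 0;
}
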
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
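
Decoding a register read is the mirror image of composing one: mask with _M, then shift right by _S. A small sketch decoding a PF_FW_ATQLEN value into its ring length and status bits; the raw value here is made up for illustration, and the field macros are again re-declared locally so the example compiles stand-alone:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Local copies of the field helpers for illustration only. */
#define BIT(n)				(1UL << (n))
#define MAKEMASK(m, s)			((m) << (s))
#define PF_FW_ATQLEN_ATQLEN_S		0
#define PF_FW_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
#define PF_FW_ATQLEN_ATQOVFL_M		BIT(29)
#define PF_FW_ATQLEN_ATQENABLE_M	BIT(31)

int main(void)
{
	/* Hypothetical raw register read: queue enabled, no
	 * overflow, ring length 64. */
	uint32_t raw = PF_FW_ATQLEN_ATQENABLE_M | 64;

	uint32_t len = (raw & PF_FW_ATQLEN_ATQLEN_M) >>
		       PF_FW_ATQLEN_ATQLEN_S;
	bool enabled = (raw & PF_FW_ATQLEN_ATQENABLE_M) != 0;
	bool overflow = (raw & PF_FW_ATQLEN_ATQOVFL_M) != 0;

	printf("ATQ: len=%u enabled=%d overflow=%d\n",
	       (unsigned int)len, enabled, overflow);
	return 0;
}
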
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
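+/*
+ * Indexed register arrays: macros taking an index argument (e.g. (_VF),
+ * (_VF128), (_VP16)) compute a per-instance address from a base plus a
+ * fixed stride, and the matching _MAX_INDEX macro gives the highest
+ * valid index.  A sketch of a bounds-checked per-VF write, assuming the
+ * wr32() helper from ice_osdep.h:
+ *
+ *	if (vf_id <= VF_MBX_ATQLEN_MAX_INDEX)
+ *		wr32(hw, VF_MBX_ATQLEN(vf_id),
+ *		     len | VF_MBX_ATQLEN_ATQENABLE_M);
+ */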
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
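+/*
+ * Illustrative sketch, not part of the generated register map: a field's
+ * _S/_M pair composes as (val << FIELD_S) & FIELD_M, so one flex-descriptor
+ * extraction word can be assembled from its three fields as below.  The
+ * function name is hypothetical, and the u8/u16/u32 fixed-width typedefs
+ * are assumed to come from the driver's OS shim header.
+ */
+static inline u32
+ice_flx_wrd0_sketch(u8 mdid, u16 off, u8 opc)
+{
+	/* Place each value at its field offset and trim it to the mask. */
+	return (((u32)mdid << GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S) &
+		GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M) |
+	       (((u32)off << GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S) &
+		GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M) |
+	       (((u32)opc << GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S) &
+		GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M);
+}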
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
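+/*
+ * Illustrative sketch, not part of the generated register map: fields are
+ * read back by masking with _M and shifting right by _S.  The helper name
+ * is hypothetical and the caller is assumed to have already read GL_FWSTS
+ * through whatever MMIO accessor the driver provides.
+ */
+static inline u8
+ice_fwsts_fws0b_sketch(u32 fwsts)
+{
+	/* Extract the FWS0B byte from a GL_FWSTS register value. */
+	return (u8)((fwsts & GL_FWSTS_FWS0B_M) >> GL_FWSTS_FWS0B_S);
+}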
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
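+/* Illustrative sketch only, not part of the autogenerated map: in sibling
+ * Intel drivers a segment descriptor is programmed by writing the data
+ * registers and then latching via PFHMC_SDCMD; the same sequence is
+ * assumed here. wr32() is an assumed osdep-style accessor, sd_data_low/
+ * sd_data_high/bp_cnt/sd_idx are hypothetical locals:
+ *
+ *	wr32(hw, PFHMC_SDDATAHIGH, sd_data_high);
+ *	wr32(hw, PFHMC_SDDATALOW,
+ *	     ((sd_data_low << PFHMC_SDDATALOW_PMSDDATALOW_S) &
+ *	      PFHMC_SDDATALOW_PMSDDATALOW_M) |
+ *	     ((bp_cnt << PFHMC_SDDATALOW_PMSDBPCOUNT_S) &
+ *	      PFHMC_SDDATALOW_PMSDBPCOUNT_M) |
+ *	     PFHMC_SDDATALOW_PMSDVALID_M);
+ *	wr32(hw, PFHMC_SDCMD, PFHMC_SDCMD_PMSDWR_M | sd_idx);
+ */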
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
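+/* Illustrative sketch only: every _S/_M pair in this file follows the same
+ * shift/mask convention, so any field decodes as in this hypothetical fuse
+ * read (rd32() is an assumed osdep-style accessor):
+ *
+ *	u32 fuse = rd32(hw, GL_UFUSE_SOC);
+ *	u8 port_mode = (fuse & GL_UFUSE_SOC_PORT_MODE_M) >>
+ *		       GL_UFUSE_SOC_PORT_MODE_S;
+ */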
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
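+/* Illustrative sketch only: a vector is typically re-enabled with a single
+ * GLINT_DYN_CTL write built from the masks above (wr32() is an assumed
+ * osdep-style accessor; vector and itr_idx are hypothetical locals):
+ *
+ *	wr32(hw, GLINT_DYN_CTL(vector),
+ *	     GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ *	     ((itr_idx << GLINT_DYN_CTL_ITR_INDX_S) &
+ *	      GLINT_DYN_CTL_ITR_INDX_M));
+ */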
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
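+/* Illustrative sketch only: QINT_RQCTL/QINT_TQCTL bind a queue's interrupt
+ * cause to an MSI-X vector and ITR index. A hypothetical mapping of RX
+ * queue qid to vector v could look like (wr32() assumed):
+ *
+ *	wr32(hw, QINT_RQCTL(qid),
+ *	     ((v << QINT_RQCTL_MSIX_INDX_S) & QINT_RQCTL_MSIX_INDX_M) |
+ *	     ((itr << QINT_RQCTL_ITR_INDX_S) & QINT_RQCTL_ITR_INDX_M) |
+ *	     QINT_RQCTL_CAUSE_ENA_M);
+ */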
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
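+/* Illustrative sketch only: after refilling RX descriptors the tail bump
+ * writes the next-to-use index into QRX_TAIL; since the TAIL field starts
+ * at bit 0, masking with QRX_TAIL_TAIL_M bounds it to 13 bits (wr32()
+ * assumed, rxq is a hypothetical queue struct):
+ *
+ *	wr32(hw, QRX_TAIL(qid), rxq->next_to_use & QRX_TAIL_TAIL_M);
+ */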
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
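+/*
+ * Usage sketch (illustrative only): a malicious driver detection event
+ * latched in GL_MDET_TX_PQM is meaningful only while the VALID bit is
+ * set; a handler would typically decode the fields and then clear the
+ * event by writing all ones. rd32()/wr32() are assumed MMIO helpers:
+ *
+ *	u32 mdet = rd32(hw, GL_MDET_TX_PQM);
+ *
+ *	if (mdet & GL_MDET_TX_PQM_VALID_M) {
+ *		u8 pf = (mdet & GL_MDET_TX_PQM_PF_NUM_M) >>
+ *			GL_MDET_TX_PQM_PF_NUM_S;
+ *		u8 event = (mdet & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+ *			   GL_MDET_TX_PQM_MAL_TYPE_S;
+ *		u16 queue = (mdet & GL_MDET_TX_PQM_QNUM_M) >>
+ *			    GL_MDET_TX_PQM_QNUM_S;
+ *		wr32(hw, GL_MDET_TX_PQM, 0xFFFFFFFF);
+ *	}
+ */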
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
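+/*
+ * Usage sketch (illustrative only): MSIX_TADD1/MSIX_TUADD1/MSIX_TMSG/
+ * MSIX_TVCTRL1 together form one 16-byte MSI-X table entry per vector,
+ * so programming vector 'v' could look like the following. wr32() is an
+ * assumed MMIO write helper; msg_addr/msg_data would come from the
+ * platform interrupt controller:
+ *
+ *	wr32(hw, MSIX_TADD1(v), (u32)msg_addr & ~MSIX_TADD1_MSIXTADD10_M);
+ *	wr32(hw, MSIX_TUADD1(v), (u32)(msg_addr >> 32));
+ *	wr32(hw, MSIX_TMSG(v), msg_data);
+ *	wr32(hw, MSIX_TVCTRL1(v), 0);
+ *
+ * Writing MSIX_TVCTRL1_MASK_M instead of 0 would leave the vector masked.
+ */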
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
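+/*
+ * Usage sketch (illustrative only): after a reset, completion is
+ * reported through the *_DONE bits of GLNVM_ULD, so a driver might poll
+ * until the bits it cares about are all set. rd32() and msec_delay()
+ * are assumed helpers, and this particular bit selection is an example,
+ * not a specification:
+ *
+ *	u32 want = GLNVM_ULD_PCIER_DONE_M | GLNVM_ULD_CORER_DONE_M |
+ *		   GLNVM_ULD_GLOBR_DONE_M;
+ *	int i;
+ *
+ *	for (i = 0; i < timeout; i++) {
+ *		if ((rd32(hw, GLNVM_ULD) & want) == want)
+ *			break;
+ *		msec_delay(1);
+ *	}
+ */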
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
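+/*
+ * Usage sketch (illustrative only): PF_FUNC_RID packs the PCI requester
+ * ID of the function, so the bus/device/function triple can be recovered
+ * with the shift/mask pairs above (rd32() is an assumed MMIO helper):
+ *
+ *	u32 rid = rd32(hw, PF_FUNC_RID);
+ *	u8 bus = (rid & PF_FUNC_RID_BUS_NUMBER_M) >>
+ *		 PF_FUNC_RID_BUS_NUMBER_S;
+ *	u8 dev = (rid & PF_FUNC_RID_DEVICE_NUMBER_M) >>
+ *		 PF_FUNC_RID_DEVICE_NUMBER_S;
+ *	u8 func = (rid & PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+ *		  PF_FUNC_RID_FUNCTION_NUMBER_S;
+ */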
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
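+/* The GLPES_* statistics above are 64-bit counters split into _LO/_HI
+ * register pairs (the _HI word is narrower; see its mask). An illustrative
+ * read, assuming the base-code rd32() helper and a PF index pf_id:
+ *   u64 octs = (u64)rd32(hw, GLPES_PFIP4RXOCTSLO(pf_id)) |
+ *              ((u64)rd32(hw, GLPES_PFIP4RXOCTSHI(pf_id)) << 32);
+ */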
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
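+/* Throughout this file each register field is described by an _S (bit
+ * shift) and _M (bit mask, built with MAKEMASK or BIT) pair. An
+ * illustrative extraction of the flow-director guaranteed-filter count,
+ * assuming the base-code rd32() helper:
+ *   u32 val = rd32(hw, GLQF_FD_CNT);
+ *   u32 gcnt = (val & GLQF_FD_CNT_FD_GCNT_M) >> GLQF_FD_CNT_FD_GCNT_S;
+ */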
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
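+/*
+ * Editor's sketch, assuming ICE_WRITE_REG from ice_osdep.h: macros that
+ * take two parameters, such as GLQF_HLUT(_i, _j), address a 2-D array
+ * of registers. Each GLQF_HLUT word packs four 6-bit hash LUT entries,
+ * composed by the inverse of the read pattern:
+ *
+ *	u32 lut = ((q0 << GLQF_HLUT_LUT0_S) & GLQF_HLUT_LUT0_M) |
+ *		  ((q1 << GLQF_HLUT_LUT1_S) & GLQF_HLUT_LUT1_M) |
+ *		  ((q2 << GLQF_HLUT_LUT2_S) & GLQF_HLUT_LUT2_M) |
+ *		  ((q3 << GLQF_HLUT_LUT3_S) & GLQF_HLUT_LUT3_M);
+ *	ICE_WRITE_REG(hw, GLQF_HLUT(i, j), lut);
+ *
+ * q0..q3, i and j are illustrative names, not driver identifiers.
+ */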
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
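+/*
+ * Editor's sketch: the VPQF_* registers above are replicated per VF and
+ * indexed by absolute VF number (0...255). Reading the bucket count for
+ * one VF (ICE_READ_REG assumed from ice_osdep.h; vf_id is illustrative):
+ *
+ *	u32 cnt = (ICE_READ_REG(hw, VPQF_PECNT_0(vf_id)) &
+ *		   VPQF_PECNT_0_BUCKETCNT_M) >> VPQF_PECNT_0_BUCKETCNT_S;
+ */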
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
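+/*
+ * Editor's sketch: the GLPRT_* octet counters are split into a 32-bit
+ * low register and an 8-bit high register, forming 40-bit rolling
+ * counters per port. A plausible combined read (ICE_READ_REG assumed
+ * from ice_osdep.h; port is illustrative):
+ *
+ *	u64 gotc = ((u64)(ICE_READ_REG(hw, GLPRT_GOTCH(port)) &
+ *			  GLPRT_GOTCH_GOTCH_M) << 32) |
+ *		   ICE_READ_REG(hw, GLPRT_GOTCL(port));
+ */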
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
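+/*
+ * Editor's sketch: the two-parameter PFC counters above are indexed by
+ * port (_i, stride 8) and priority (_j, stride 64), e.g. XOFF frames
+ * received on one traffic class (ICE_READ_REG assumed from ice_osdep.h;
+ * port and prio are illustrative):
+ *
+ *	u32 xoff = ICE_READ_REG(hw, GLPRT_PXOFFRXC(port, prio));
+ */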
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
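+/*
+ * Editor's sketch: GLV_REPC packs two 16-bit per-VSI RX error counters
+ * into one register (ICE_READ_REG assumed from ice_osdep.h; vsi is
+ * illustrative):
+ *
+ *	u32 repc = ICE_READ_REG(hw, GLV_REPC(vsi));
+ *	u16 no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >>
+ *		      GLV_REPC_NO_DESC_CNT_S;
+ *	u16 errs = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
+ */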
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
+#define EMP_SWT_REPIND				0x0020401c /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040a4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
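+/*
+ * Usage sketch (illustrative only, not part of the autogenerated map):
+ * driving the indirect GL_PRE_CFG_CMD/GL_PRE_CFG_DATA register pair.  The
+ * idiom used throughout this file is compose-with-shift on write and
+ * mask-then-shift on read.  Assumptions: rd32()/wr32() MMIO accessors as
+ * provided by ice_osdep.h, the CMD bit selects the operation, and DONE is
+ * set by hardware on completion.
+ */
+#if 0	/* example */
+static inline void
+ice_pre_cfg_write(struct ice_hw *hw, u32 addr, u32 tblidx, const u32 *data)
+{
+	u32 cmd, i;
+
+	/* Stage the payload in the GL_PRE_CFG_DATA array (7 dwords). */
+	for (i = 0; i <= GL_PRE_CFG_DATA_MAX_INDEX; i++)
+		wr32(hw, GL_PRE_CFG_DATA(i), data[i]);
+
+	/* Compose the command: entry address, table index, opcode bit. */
+	cmd = ((addr << GL_PRE_CFG_CMD_ADDR_S) & GL_PRE_CFG_CMD_ADDR_M) |
+	      ((tblidx << GL_PRE_CFG_CMD_TBLIDX_S) & GL_PRE_CFG_CMD_TBLIDX_M) |
+	      GL_PRE_CFG_CMD_CMD_M;
+	wr32(hw, GL_PRE_CFG_CMD, cmd);
+
+	/* Poll DONE; a real caller would bound this loop with a timeout. */
+	while (!(rd32(hw, GL_PRE_CFG_CMD) & GL_PRE_CFG_CMD_DONE_M))
+		;
+}
+#endif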
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040ac /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041c0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
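+/*
+ * Usage sketch (illustrative only): the UP0..UP7 fields above sit at a
+ * regular 4-bit stride (UPn_S == 4 * n, each field 3 bits wide per
+ * MAKEMASK(0x7, ...)), so a per-TC user-priority table can be packed in a
+ * loop instead of naming all eight masks.
+ */
+#if 0	/* example */
+static inline u32 ice_pack_tc_up(const u8 up[8])
+{
+	u32 val = 0;
+	int i;
+
+	for (i = 0; i < 8; i++)
+		val |= (u32)(up[i] & 0x7) << (PRT_TCTUPR_UP0_S + 4 * i);
+	return val;
+}
+#endif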
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
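+/*
+ * Usage sketch (illustrative only): reading the 64-bit timer value for
+ * timer block _i.  The low and high halves live in separate registers, so
+ * the low word is re-read to detect a carry between the two accesses;
+ * whether the hardware additionally latches a snapshot on the first read
+ * is not assumed here.  rd32() is assumed from ice_osdep.h.
+ */
+#if 0	/* example */
+static inline u64 ice_read_tsyn_time(struct ice_hw *hw, u8 tmr_idx)
+{
+	u32 lo, hi, lo2;
+
+	lo = rd32(hw, GLTSYN_TIME_L(tmr_idx));
+	hi = rd32(hw, GLTSYN_TIME_H(tmr_idx));
+	lo2 = rd32(hw, GLTSYN_TIME_L(tmr_idx));
+	if (lo2 < lo) {
+		/* TIME_L wrapped between the reads; take the fresh pair. */
+		lo = lo2;
+		hi = rd32(hw, GLTSYN_TIME_H(tmr_idx));
+	}
+	return ((u64)hi << 32) | lo;
+}
+#endif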
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
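+/*
+ * Usage sketch (illustrative only): PFHH_SEM/PFTSYN_SEM gate cross-PF
+ * access to the timer hardware.  A minimal acquire loop, assuming that a
+ * read returning BUSY clear grants ownership to the reading PF; the exact
+ * grant/release protocol is firmware-defined and not spelled out by this
+ * register map.
+ */
+#if 0	/* example */
+static inline bool ice_tsyn_sem_acquire(struct ice_hw *hw, u32 retries)
+{
+	while (retries--) {
+		if (!(rd32(hw, PFTSYN_SEM) & PFTSYN_SEM_BUSY_M))
+			return true;	/* semaphore taken for this PF */
+		/* a real caller would pause here between polls */
+	}
+	return false;
+}
+#endif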
+#define GLPE_TSCD_FLR(_i)			(0x0051E24c + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
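+/*
+ * Usage sketch (illustrative only): the read-side counterpart of the
+ * compose idiom - mask first, then shift down - here recovering the VF
+ * range owned by this PF from PF_VT_PFALLOC, guarded by the VALID bit.
+ */
+#if 0	/* example */
+static inline bool
+ice_pf_vf_range(struct ice_hw *hw, u8 *first_vf, u8 *last_vf)
+{
+	u32 val = rd32(hw, PF_VT_PFALLOC);
+
+	if (!(val & PF_VT_PFALLOC_VALID_M))
+		return false;	/* no VF range allocated to this PF */
+	*first_vf = (val & PF_VT_PFALLOC_FIRSTVF_M) >> PF_VT_PFALLOC_FIRSTVF_S;
+	*last_vf = (val & PF_VT_PFALLOC_LASTVF_M) >> PF_VT_PFALLOC_LASTVF_S;
+	return true;
+}
+#endif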
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
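+/*
+ * Usage sketch (illustrative only): per-VSI RSS programming.  VSIQF_HKEY
+ * spreads a 52-byte hash key over 13 dword registers (_i=0...12) and
+ * VSIQF_HLUT packs a 64-entry, 4-bit lookup table into 16 dwords
+ * (_i=0...15); both are addressed (_i, _VSI) with a 4 KB stride per _i.
+ * wr32() is assumed from ice_osdep.h.
+ */
+#if 0	/* example */
+static inline void
+ice_vsi_rss_program(struct ice_hw *hw, u16 vsi,
+		    const u8 key[52], const u8 lut[64])
+{
+	u32 val;
+	int i;
+
+	for (i = 0; i <= VSIQF_HKEY_MAX_INDEX; i++) {
+		/* four key bytes per register, KEY_0 in bits 7:0 */
+		val = key[4 * i] |
+		      ((u32)key[4 * i + 1] << VSIQF_HKEY_KEY_1_S) |
+		      ((u32)key[4 * i + 2] << VSIQF_HKEY_KEY_2_S) |
+		      ((u32)key[4 * i + 3] << VSIQF_HKEY_KEY_3_S);
+		wr32(hw, VSIQF_HKEY(i, vsi), val);
+	}
+	for (i = 0; i <= VSIQF_HLUT_MAX_INDEX; i++) {
+		/* four 4-bit LUT entries per register at an 8-bit stride */
+		val = (lut[4 * i] & 0xF) |
+		      ((u32)(lut[4 * i + 1] & 0xF) << VSIQF_HLUT_LUT1_S) |
+		      ((u32)(lut[4 * i + 2] & 0xF) << VSIQF_HLUT_LUT2_S) |
+		      ((u32)(lut[4 * i + 3] & 0xF) << VSIQF_HLUT_LUT3_S);
+		wr32(hw, VSIQF_HLUT(i, vsi), val);
+	}
+}
+#endif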
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
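+/*
+ * Usage sketch (illustrative only): arming link-change and magic-packet
+ * wake-up in PFPM_WUFC, then consuming the latched cause from PFPM_WUS
+ * after resume.  PFPM_WUS is assumed to be write-1-to-clear, as is usual
+ * for Intel wake-up status registers.
+ */
+#if 0	/* example */
+static inline void ice_pf_arm_wakeup(struct ice_hw *hw)
+{
+	wr32(hw, PFPM_WUFC, PFPM_WUFC_LNKC_M | PFPM_WUFC_MAG_M);
+}
+
+static inline u32 ice_pf_wakeup_cause(struct ice_hw *hw)
+{
+	u32 wus = rd32(hw, PFPM_WUS);
+
+	wr32(hw, PFPM_WUS, wus);	/* clear the latched cause bits */
+	return wus;
+}
+#endif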
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
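+/*
+ * Usage sketch (illustrative only): bringing up the VF->PF mailbox send
+ * queue.  The ATQBAL address field starts at bit 6, so the descriptor
+ * ring must be 64-byte aligned; head and tail are zeroed and the ring is
+ * armed by writing the length together with the ENABLE bit.
+ */
+#if 0	/* example */
+static inline void
+ice_vf_mbx_atq_init(struct ice_hw *hw, u64 ring_dma, u16 ring_len)
+{
+	wr32(hw, VF_MBX_ATQBAL1, (u32)ring_dma);	/* 64B aligned */
+	wr32(hw, VF_MBX_ATQBAH1, (u32)(ring_dma >> 32));
+	wr32(hw, VF_MBX_ATQH1, 0);
+	wr32(hw, VF_MBX_ATQT1, 0);
+	wr32(hw, VF_MBX_ATQLEN1,
+	     ((ring_len << VF_MBX_ATQLEN1_ATQLEN_S) &
+	      VF_MBX_ATQLEN1_ATQLEN_M) | VF_MBX_ATQLEN1_ATQENABLE_M);
+}
+#endif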
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
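+/*
+ * Usage sketch (illustrative only): re-arming a VF MSI-X vector after
+ * servicing it - enable the interrupt, clear the pending-bit-array hint,
+ * and select which ITR times the next credit (index 3 conventionally
+ * meaning "no ITR" on Intel parts).
+ */
+#if 0	/* example */
+static inline void ice_vf_irq_rearm(struct ice_hw *hw, u16 vec, u8 itr_idx)
+{
+	u32 val = VFINT_DYN_CTLN_INTENA_M |
+		  VFINT_DYN_CTLN_CLEARPBA_M |
+		  (((u32)itr_idx << VFINT_DYN_CTLN_ITR_INDX_S) &
+		   VFINT_DYN_CTLN_ITR_INDX_M);
+
+	wr32(hw, VFINT_DYN_CTLN(vec), val);
+}
+#endif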
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+
+#endif
diff --git a/drivers/net/ice/base/ice_impl_guide.c b/drivers/net/ice/base/ice_impl_guide.c
new file mode 100644
index 0000000..853cf52
--- /dev/null
+++ b/drivers/net/ice/base/ice_impl_guide.c
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/*! \mainpage Intel FIXME Shared Code Implementation Guide
+ *
+ * \section secA  Operating System Interface
+ * The Shared Code is common code designed to provide a single, shared
+ * implementation of initialization and other hardware tasks across
+ * operating systems.
+ *
+ * \section sec2 Operating System Dependent Files
+ * Each driver is required to implement one or two files, ice_osdep.c and
+ * ice_osdep.h, for the operating-system-dependent portions of the shared
+ * code. The following must be provided by the osdep file(s): in the header
+ * file if implemented as a macro or inline function, or in the C file (with
+ * a prototype in the header file) if implemented as a function.
+ *
+ * \section sec3 Data Types/structures
+ * \htmlonly <br>
+ * __le16<br>
+ * __le32<br>
+ * __le64<br>
+ * <br>
+ * struct ice_dma_mem {<br>
+ * &emsp;void *va;<br>
+ * &emsp;&lt;os-specific physical address type&gt; pa;<br>
+ * &emsp;&lt;os-specific size type&gt; size;<br>
+ * &emsp;&lt;other OS-specific data...&gt;<br>
+ * }<br>
+ * <br>
+ * struct ice_lock {<br>
+ * &emsp;&lt;os specific lock type&gt; lock;<br>
+ * }<br>
+ * <br>
+ * LIST_ENTRY_TYPE	(list entry, e.g. list_head on Linux, _LIST_ENTRY on Windows)<br>
+ * LIST_HEAD_TYPE	(list head, e.g. list_head on Linux, _LIST_ENTRY on Windows)<br>
+ * \endhtmlonly
+ *
+ * \section sec4 Functions/macros
+ * \htmlonly <br>
+ * <b>See ice_common.c:ice_init_hw() for some examples</b><br>
+ * <br>
+ * STATIC<br>
+ * CPU_TO_BE64(a)<br>
+ * CPU_TO_BE32(a)<br>
+ * CPU_TO_BE16(a)<br>
+ * CPU_TO_LE64(a)<br>
+ * CPU_TO_LE32(a)<br>
+ * CPU_TO_LE16(a)<br>
+ * LE64_TO_CPU(a)<br>
+ * LE32_TO_CPU(a)<br>
+ * LE16_TO_CPU(a)<br>
+ * offsetof(_type, _field)<br>
+ * FIELD_SIZEOF(_type, _field)<br>
+ * ARRAY_SIZE(_array)<br>
+ * NTOHL(a)<br>
+ * NTOHS(a)<br>
+ * HTONL(a)<br>
+ * HTONS(a)<br>
+ * SNPRINTF(buf, size, fmt, ...)<br>
+ * <br>
+ * u32 rd32(struct ice_hw *, reg_offset)<br>
+ * void wr32(struct ice_hw *, reg_offset, u32 value)<br>
+ * u64 rd64(struct ice_hw *, reg_offset)<br>
+ * void wr64(struct ice_hw *, reg_offset, u64 value)<br>
+ * <br>
+ * void ice_flush(struct ice_hw *)<br>
+ * <br>
+ * void ice_debug(struct ice_hw *hw, u32 mask, char *format, ...)<br>
+ * void ice_debug_array(struct ice_hw *hw, u32 mask, u32 rowsize, u32 groupsize, char *buf, size_t len)<br>
+ * <br>
+ * void ice_info(struct ice_hw *hw, char *format, ...)<br>
+ * <br>
+ * void ice_warn(struct ice_hw *hw, char *format, ...)<br>
+ * Like ice_info but may log the message at a higher warning level<br>
+ * <br>
+ * Delay functions - bool sleep indicates sleep (true) or busy-wait (false)<br>
+ * void ice_usec_delay(unsigned long usecs, bool sleep)<br>
+ * void ice_msec_delay(unsigned long msecs, bool sleep)<br>
+ * <br>
+ * void *ice_memset(void *addr, int c, size_t n, ice_memset_type direction)<br>
+ * void *ice_memcpy(void *d, const void *s, size_t n, ice_memcpy_type dir)<br>
+ * void *ice_memdup(struct ice_hw *hw, const void *s, size_t n, ice_memcpy_type dir)<br>
+ * <br>
+ * Memory allocation functions - expected to provide zeroed memory<br>
+ * void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size)<br>
+ * void *ice_malloc(struct ice_hw *hw, size)<br>
+ * void *ice_calloc(struct ice_hw *hw, cnt, size)<br>
+ * <br>
+ * void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m)<br>
+ * void ice_free(struct ice_hw *, void *) - should not fail if void pointer is NULL<br>
+ * <br>
+ * void ice_init_lock(struct ice_lock *lock);<br>
+ * void ice_acquire_lock(struct ice_lock *lock);<br>
+ * void ice_release_lock(struct ice_lock *lock);<br>
+ * void ice_destroy_lock(struct ice_lock *lock);<br>
+ * <br>
+ * void ice_declare_bitmap(name, u16 size);<br>
+ * void ice_set_bit(unsigned int bit, unsigned long *name);<br>
+ * <br>
+ * u8 ice_hweight8(u8 weight) - determine Hamming weight of an 8-bit value<br>
+ * <br>
+ * <b>doubly-linked list management macros:</b><br>
+ * INIT_LIST_HEAD(struct LIST_HEAD_TYPE *head)<br>
+ * LIST_EMPTY(const struct LIST_HEAD_TYPE *head)<br>
+ * LIST_ADD(struct LIST_ENTRY_TYPE *entry, struct LIST_HEAD_TYPE *head)<br>
+ * LIST_ADD_AFTER(struct LIST_ENTRY_TYPE *entry, struct LIST_ENTRY_TYPE *elem)<br>
+ * LIST_FIRST_ENTRY(struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * LIST_DEL(struct LIST_ENTRY_TYPE *entry)<br>
+ * LIST_FOR_EACH_ENTRY(&lt;&amp;struct used as iterator&gt;, struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * Note: it is not safe to remove list entries in a LIST_FOR_EACH_ENTRY() loop<br>
+ * LIST_FOR_EACH_ENTRY_SAFE(&lt;&amp;struct used as iterator&gt;, &lt;&amp;struct used as temporary iterator&gt;, struct LIST_HEAD_TYPE *head, &lt;name of struct&gt;, &lt;name of LIST_ENTRY_TYPE member in struct&gt;)<br>
+ * LIST_REPLACE_INIT(struct LIST_HEAD_TYPE *old_head, struct LIST_HEAD_TYPE *new_head)<br>
+ *
+ * \section sec5 List implementation details
+ * The LIST macros are defined to implement a doubly-linked list which embeds
+ * the LIST_ENTRY structures as elements of the items linked to the list. The
+ * macros assume that pointer arithmetic can be used to extract the container
+ * structure from the LIST_ENTRY element and the structure type.
+ * <br>
+ * INIT_LIST_HEAD is expected to initialize a list head so that it
+ * represents a new, empty list.
+ * <br>
+ * LIST_EMPTY is called to determine if a list pointed to by a given list head
+ * contains any elements. Calling LIST_EMPTY on an uninitialized list head
+ * results in undefined, implementation-specific behavior.
+ * <br>
+ * LIST_ADD is called to add an element to the front of a list pointed to by
+ * a given list head. It is assumed that LIST_ADD will perform any required
+ * initialization for the LIST_ENTRY_TYPE structure.
+ * <br>
+ * LIST_ADD_AFTER is called to insert a new element into the list after the
+ * given element. It is assumed that LIST_ADD_AFTER will perform any required
+ * initialization for the LIST_ENTRY_TYPE structure.
+ * <br>
+ * LIST_FIRST_ENTRY is called to obtain a pointer to the structure containing
+ * the first LIST_ENTRY_TYPE element of a list pointed to by the given list
+ * head. Calling LIST_FIRST_ENTRY with an empty or uninitialized list results
+ * in undefined, implementation-specific behavior.
+ * <br>
+ * LIST_NEXT_ENTRY is called to obtain a pointer to the structure containing
+ * the next LIST_ENTRY_TYPE element in the list, given a pointer to the current
+ * structure.
+ * <br>
+ * LIST_DEL is called to remove an element from its associated list.
+ * <br>
+ * LIST_FOR_EACH_ENTRY is used to loop through every element of a list, using
+ * a pointer of the containing type as an iterator. It is expected to have
+ * semantics similar to for loops, and can take either a single or block
+ * statement following it. Calling LIST_DEL on the iterator is not safe and
+ * results in undefined, implementation-specific behavior. If deleting elements
+ * from the list while iterating is required, use LIST_FOR_EACH_ENTRY_SAFE
+ * instead.
+ * <br>
+ * LIST_FOR_EACH_ENTRY_SAFE is used to loop through every element of a list
+ * guaranteeing safety to delete the iterator element even during the
+ * iteration. It requires two temporary pointers both of the struct type used
+ * as the iterator. If the ability to remove the iterated element from the
+ * list is required, then LIST_FOR_EACH_ENTRY_SAFE must be used instead of
+ * LIST_FOR_EACH_ENTRY.
+ * <br>
+ * LIST_REPLACE_INIT is used to replace the old head with a new head. It also
+ * reinitializes the old head to make it an empty list head. The new head does
+ * not have to be initialized before calling this function.
+ * <br>
+ * \endhtmlonly
+ */
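+
+/* Illustrative example only: a minimal userspace sketch of a few of the
+ * primitives listed above, assuming POSIX threads and memory-mapped BAR
+ * registers. The hw_addr member and the exact helper shapes below are
+ * assumptions made for the sake of the sketch; each driver supplies its own
+ * equivalents in ice_osdep.h.
+ *
+ *	#include <stdint.h>
+ *	#include <pthread.h>
+ *
+ *	struct ice_lock {
+ *		pthread_mutex_t lock;
+ *	};
+ *
+ *	static inline void ice_init_lock(struct ice_lock *l)
+ *	{
+ *		pthread_mutex_init(&l->lock, NULL);
+ *	}
+ *
+ *	static inline void ice_acquire_lock(struct ice_lock *l)
+ *	{
+ *		pthread_mutex_lock(&l->lock);
+ *	}
+ *
+ *	static inline void ice_release_lock(struct ice_lock *l)
+ *	{
+ *		pthread_mutex_unlock(&l->lock);
+ *	}
+ *
+ *	#define rd32(hw, reg) \
+ *		(*(volatile uint32_t *)((char *)(hw)->hw_addr + (reg)))
+ *	#define wr32(hw, reg, value) \
+ *		(*(volatile uint32_t *)((char *)(hw)->hw_addr + (reg)) = (value))
+ *
+ * Similarly, a list walk that removes entries must use the _SAFE variant of
+ * the iteration macro, e.g. (struct and member names hypothetical):
+ *
+ *	struct my_entry *e, *tmp;
+ *
+ *	LIST_FOR_EACH_ENTRY_SAFE(e, tmp, &head, my_entry, list_entry)
+ *		LIST_DEL(&e->list_entry);
+ */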
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..30671a5
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2290 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
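+/* Illustrative use of the unions above: software posts a buffer through the
+ * "read" view (buf_dma and hdr_dma are hypothetical DMA addresses), and
+ * hardware later overwrites the same slot with the "wb" writeback view:
+ *
+ *	rxd->read.pkt_addr = CPU_TO_LE64(buf_dma);
+ *	rxd->read.hdr_addr = CPU_TO_LE64(hdr_dma);
+ */
+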
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-ip values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
+
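+/* Putting the qword1 shifts and masks together, a hypothetical receive path
+ * could decode a written-back legacy descriptor as follows (illustrative
+ * sketch only; field validity still depends on the DD bit):
+ *
+ *	u64 qw1 = LE64_TO_CPU(rxd->wb.qword1.status_error_len);
+ *	u32 status = (qw1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+ *	u32 error = (qw1 & ICE_RXD_QW1_ERROR_M) >> ICE_RXD_QW1_ERROR_S;
+ *	u32 ptype = (qw1 & ICE_RXD_QW1_PTYPE_M) >> ICE_RXD_QW1_PTYPE_S;
+ *	u16 len = (qw1 & ICE_RXD_QW1_LEN_PBUF_M) >> ICE_RXD_QW1_LEN_PBUF_S;
+ *
+ *	if (status & BIT(ICE_RX_DESC_STATUS_DD_S))
+ *		... the fields above are valid ...
+ */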
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 2
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Flow Id upper 16-bits
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 4
+ * Flex-field 0: Destination Vsi
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: there are 64 profiles
+ * in total. Profile IDs 0/1 are for the legacy descriptors, and
+ * profiles 2-63 are flex profiles that can be programmed to
+ * extract specific metadata (profile 7 is reserved for HW).
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex Descriptor Dword Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32b_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32b_rx_flex_desc.ptype_flex_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32b_rx_flex_desc.pkt_len member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
+
+/* for ice_32b_rx_flex_desc.hdr_len_sph_flex_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
+
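+/* As with the legacy descriptor, a hypothetical receive path polls the DD
+ * bit in status_error0 before trusting any other writeback field
+ * (illustrative sketch):
+ *
+ *	u16 stat_err0 = LE16_TO_CPU(rxd->wb.status_error0);
+ *
+ *	if (stat_err0 & BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S))
+ *		... pkt_len, ptype_flex_flags0, etc. are valid ...
+ */
+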
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * Some variables are sized larger than their hardware fields because those
+ * fields cross byte boundaries. If a variable were sized exactly to its
+ * field width, bits could be shifted off its top when the field starts near
+ * the top of one byte and continues into the next.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
+
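+/* Illustrative example: context-info tables are arrays of ice_ctx_ele built
+ * with ICE_CTX_STORE, pairing each field of the SW struct (offset/size_of)
+ * with its position in the packed HW context (width/lsb). A sketch of how
+ * such a table for ice_rlan_ctx might begin (the real table lives with the
+ * code that packs the context):
+ *
+ *	static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+ *		ICE_CTX_STORE(ice_rlan_ctx, head,	13,	0),
+ *		ICE_CTX_STORE(ice_rlan_ctx, cpuid,	8,	13),
+ *		...
+ *	};
+ */
+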
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
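+
+/* Illustrative composition of cmd_type_offset_bsz for a simple single-buffer
+ * data descriptor (pkt_len and dma_addr are hypothetical locals; the offset
+ * fields are left zero for brevity):
+ *
+ *	u64 qw1 = ICE_TX_DESC_DTYPE_DATA |
+ *		  ((u64)(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS) <<
+ *		   ICE_TXD_QW1_CMD_S) |
+ *		  ((u64)pkt_len << ICE_TXD_QW1_TX_BUF_SZ_S);
+ *
+ *	txd->buf_addr = CPU_TO_LE64(dma_addr);
+ *	txd->cmd_type_offset_bsz = CPU_TO_LE64(qw1);
+ */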
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
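+
+/* Illustrative qw1 for a TSO context descriptor (tso_len and mss are
+ * hypothetical locals; mss must stay within the MIN/MAX bounds above):
+ *
+ *	u64 qw1 = ICE_TX_DESC_DTYPE_CTX |
+ *		  ((u64)ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
+ *		  ((u64)tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+ *		  ((u64)mss << ICE_TXD_CTX_QW1_MSS_S);
+ *
+ *	ctx_desc->qw1 = CPU_TO_LE64(qw1);
+ */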
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0x7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * Some variables are sized larger than their hardware fields because those
+ * fields cross byte boundaries. If a variable were sized exactly to its
+ * field width, bits could be shifted off its top when the field starts near
+ * the top of one byte and continues into the next.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VF	0
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal, do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical workflow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
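+
+/* The workflow above as a hypothetical C fragment (ptype comes from the Rx
+ * descriptor; illustrative only):
+ *
+ *	struct ice_rx_ptype_decoded decoded = ice_ptype_lkup[ptype];
+ *
+ *	if (!decoded.known) {
+ *		... the packet type is unknown ...
+ *	} else if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) {
+ *		... inspect decoded.outer_ip_ver, decoded.tunnel_type,
+ *		    decoded.inner_prot, etc. to classify the packet ...
+ *	} else {
+ *		... ptype itself maps onto enum ice_rx_l2_ptype ...
+ *	}
+ */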
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
+
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
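
For reference, the decode helper above is how this table is meant to be consumed on the RX path: the ptype from an RX descriptor indexes ice_ptype_lkup, and any ICE_PTT_UNUSED_ENTRY row decodes with its known flag clear. A minimal sketch of that pattern, assuming the decoded-ptype field and enum names declared earlier in this header (the helper name itself is illustrative, not part of the patch):

    /* Sketch: reject unused ptypes, then test for non-tunneled IPv4/UDP. */
    static inline bool ptype_is_ipv4_udp(u16 ptype)
    {
    	struct ice_rx_ptype_decoded d = ice_decode_rx_desc_ptype(ptype);

    	if (!d.known)
    		return false;	/* ICE_PTT_UNUSED_ENTRY rows land here */
    	return d.outer_ip == ICE_RX_PTYPE_OUTER_IP &&
    	       d.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV4 &&
    	       d.inner_prot == ICE_RX_PTYPE_INNER_PROT_UDP;
    }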
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018 Intel Corporation
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* The highest byte of the offset must be zero (offset is 24 bits). */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond SR limit.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* Only up to 4KB (one sector) can be accessed in one AQ command */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* values in "offset" and "words" parameters are sized as words
+	 * (16 bits) but ice_aq_read_nvm expects these values in bytes.
+	 * So do this conversion while calling ice_aq_read_nvm.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. NVM ownership is not acquired here; the caller (see
+ * ice_read_sr_buf) must hold it across the read.
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words to read in this step.
+		 * A single command may not read more than one sector (4KB
+		 * worth of words) or cross a sector boundary.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* Check if this is last command, if so set proper flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads Shadow RAM word and acquire NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_word_aq
+ * method.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM setting
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as the Shadow RAM
+ * size, NVM version, EETRACK id, OEM version, and blank_nvm_mode.
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the nvm programming mode
+	 * as the blank mode may be used in the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Switching to words (sr_size contains power of 2) */
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf - Reads Shadow RAM buf and acquire lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buffer read is preceded by taking NVM ownership and is
+ * followed by its release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
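
To summarize the layering above: ice_read_sr_word() and ice_read_sr_buf() take and release NVM ownership, the *_aq variants split requests along 4KB sector boundaries and fix up endianness, and ice_read_sr_aq() converts word offsets and counts into the byte units that ice_aq_read_nvm() expects. A hedged usage sketch (offset 0 and a count of 8 words are arbitrary choices):

    /* Sketch: dump the first 8 Shadow RAM words once ice_init_nvm() has
     * succeeded; NVM ownership is handled inside ice_read_sr_buf().
     */
    static enum ice_status dump_sr_head(struct ice_hw *hw)
    {
    	u16 words = 8;
    	u16 buf[8];
    	enum ice_status status;

    	status = ice_read_sr_buf(hw, 0, &words, buf);
    	if (!status)
    		ice_debug(hw, ICE_DBG_NVM, "read %u SR words\n", words);
    	return status;
    }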
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..e1f7581
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,491 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+	u16 len_l = len;						\
+	u8 *buf_l = buf;						\
+	int i;								\
+	for (i = 0; i < len_l; i += 8)					\
+		ice_debug(hw_l, type,					\
+			  "0x%04X  0x%016"PRIx64"\n",			\
+			  i, *((u64 *)((buf_l) + i)));			\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
+
+typedef u8 ice_bitmap_t;
+#define ice_declare_bitmap(name, bits) \
+	unsigned long name[BITS_TO_LONGS(bits)]
+
+#define BITS_TO_LONGS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_PER_BYTE       8
+#define BITS_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >>			\
+		((BITS_PER_BYTE * sizeof(ice_bitmap_t)) -		\
+		(((nr) - 1) % (BITS_PER_BYTE * sizeof(ice_bitmap_t))	\
+		 + 1)))
+#define ice_is_bit_set(name, bits) \
+	(((name)[(bits) / (BITS_PER_BYTE * sizeof(long))] >> \
+	  ((bits) % (BITS_PER_BYTE * sizeof(long)))) & 1)
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = ~0u >> (8 - (sz % 8));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
+
+static inline int ice_find_first_bit(unsigned long *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < BITS_PER_BYTE * (size / BITS_PER_BYTE); i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+static inline int ice_find_next_bit(unsigned long *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	for (i = bits; i < BITS_PER_BYTE * (size / BITS_PER_BYTE); i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
+#ifndef LINUX_SUPPORT
+static inline bool ice_is_any_bit_set(u8 *bitmap, u32 bits)
+#else
+static inline bool ice_is_any_bit_set(unsigned long *bitmap, u32 bits)
+#endif
+{
+#ifndef LINUX_SUPPORT
+	u32 max_index = (bits % 8) ? (bits / 8) + 1 : (bits / 8);
+#else
+	u32 max_index = BITS_TO_LONGS(bits);
+#endif
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc("ice", s, 0)
+#define ice_calloc(h, c, s) rte_zmalloc("ice", (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline void
+ice_zero_bitmap(unsigned long *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_LONGS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+static inline void
+ice_or_bitmap(unsigned long *dst, const unsigned long *bmp1,
+	      const unsigned long *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but last chunk*/
+	for (i = 0; i < BITS_TO_LONGS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
+
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
+
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head) is provided as-is by sys/queue.h */
+
+/* Note: parameter order is swapped in the wrappers below */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
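
The DMA helpers above are memzone-backed and meant to be used in alloc/free pairs. A minimal sketch, with an arbitrary 4KB size and a placeholder comment where the hardware hand-off would go:

    /* Sketch: reserve a DMA-able buffer, expose it to HW, release it. */
    static int dma_roundtrip_example(struct ice_hw *hw)
    {
    	struct ice_dma_mem mem;

    	if (!ice_alloc_dma_mem(hw, &mem, 4096))
    		return -1;	/* memzone reservation failed */
    	/* program mem.pa into the device; CPU accesses go through mem.va */
    	ice_free_dma_mem(hw, &mem);
    	return 0;
    }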
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..665856a
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018 Intel Corporation
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* One of the allowed 5 words is reserved for the switch ID, so a recipe can
+ * hold at most 4 words, and up to 5 such recipes can be chained. The maximum
+ * number of words that can be programmed for a lookup is therefore 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further
+ * need to use flags from the field vector
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This is a mapping table entry that maps every word within a given protocol
+ * structure to the real byte offset as per the specification of that
+ * protocol header.
+ * For example, the dst address in the Ethernet header is 3 words at byte
+ * offsets 0, 2 and 4 in the actual packet header, and the src address is at
+ * byte offsets 6, 8 and 10.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
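
struct ice_protocol_entry above pairs a software protocol type with the hardware protocol ID it programs. Purely as an illustration of that pairing, using the *_HW constants defined above (the real mapping table lives in the switch code, not in this header):

    /* Hypothetical software-to-hardware protocol id map. */
    static const struct ice_protocol_entry example_prot_map[] = {
    	{ ICE_MAC_OFOS,  ICE_MAC_OFOS_HW },
    	{ ICE_IPV4_OFOS, ICE_IPV4_OFOS_HW },
    	{ ICE_TCP_IL,    ICE_TCP_IL_HW },
    	{ ICE_UDP_ILOS,  ICE_UDP_ILOS_HW },
    	{ ICE_SCTP_IL,   ICE_SCTP_IL_HW },
    };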
diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018 Intel Corporation
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect, non-posted command. */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
+#endif /* _ICE_SBQ_CMD_H_ */
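
As a usage note, ice_sbq_msg_input is the internal request form: software fills in the destination device, opcode and register address, and the driver translates that into an ice_sbq_cmd_desc for the queue. A hedged sketch of preparing a read aimed at the cgu device (the address argument is arbitrary, and queue submission is driver code outside this header):

    /* Sketch: prepare a sideband register read request for the cgu device. */
    static void sbq_prep_cgu_read(struct ice_sbq_msg_input *msg, u16 addr)
    {
    	msg->dest_dev = cgu;
    	msg->opcode = ice_sbq_msg_rd;
    	msg->msg_addr_low = addr;
    	msg->msg_addr_high = 0;
    	msg->data = 0;	/* unused for reads */
    }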
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..d885952
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,1713 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018 Intel Corporation
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through, stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is the same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements in the response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+static enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to element information
+ *
+ * This function queries HW element information
+ */
+static enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node to the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from hw
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove elements failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional, and wouldn't
+			 * go more than 9 calls
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		status = ice_sched_remove_elems(hw, node->parent, 1, &teid);
+		if (status != ICE_SUCCESS)
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "remove element failed %d\n", status);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue to port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x0400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling elements
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_clear_agg - clears the agg related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes the agg list and frees up the agg related memory
+ * previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to hw as well as to SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add elements failed\n");
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* current number of children + required nodes exceed max children ? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* utilize all the available slots if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional, and wouldn't
+			 * go more than 2 calls
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't modify the first node teid memory if the first node was
+		 * added already in the above call. Instead send some temp
+		 * memory for all other recursive calls.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional, for 1024 queues
+		 * per VSI, it goes max of 16 iterations.
+		 * 1024 / 8 = 128 layer 8 nodes
+		 * 128 /8 = 16 (add 8 nodes per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* It's always total layers - 1, the array is 0 relative so -2 */
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
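+
+/* Worked example: with the full 9-layer topology the queue group layer
+ * index is 9 - ICE_QGRP_LAYER_OFFSET = 7, i.e. the layer directly above
+ * the leaf (queue) layer at index 8.
+ */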
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the vsi layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port information structure
+ *
+ * This function is the initial call that finds the total number of Tx
+ * scheduler resources and the default topology created by firmware, and
+ * stores that information in the SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1-8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1-9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+static bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional, and wouldn't
+		 * go more than 8 calls
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* bail out if the VSI id is invalid */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI id from a given
+ * TC branch
+ */
+static struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
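+
+/* Worked example, assuming 8 children per node at the affected layers:
+ * for num_qs = 100 the queue group layer needs
+ * DIVIDE_AND_ROUND_UP(100, 8) = 13 nodes, the layer above it needs
+ * DIVIDE_AND_ROUND_UP(13, 8) = 2 nodes, and so on up to the VSI layer.
+ */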
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_rm_vsi_child_nodes - remove VSI child nodes from the tree
+ * @pi: port information structure
+ * @vsi_node: pointer to the VSI node
+ * @num_nodes: pointer to the num nodes that needs to be removed per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function removes the VSI child nodes from the tree. It gets called for
+ * lan and rdma separately.
+ */
+static void
+ice_sched_rm_vsi_child_nodes(struct ice_port_info *pi,
+			     struct ice_sched_node *vsi_node, u16 *num_nodes,
+			     u8 owner)
+{
+	struct ice_sched_node *node, *next;
+	u8 i, qgl, vsil;
+	u16 num;
+
+	qgl = ice_sched_get_qgrp_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	for (i = qgl; i > vsil; i--) {
+		num = num_nodes[i];
+		node = ice_sched_get_first_node(pi->hw, vsi_node, i);
+		while (node && num) {
+			next = node->sibling;
+			if (node->owner == owner && !node->num_children) {
+				ice_free_sched_node(pi, node);
+				num--;
+			}
+			node = next;
+		}
+	}
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of support nodes needed to add this
+ * VSI into the Tx tree, including the VSI node, its parent and the
+ * intermediate nodes in the layers below
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add intermediate nodes if the TC has no children; at
+		 * least one node is needed for the VSI
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached max
+			 * children then add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* tree has one intermediate node to add this new VSI.
+			 * So no need to calculate supported nodes for below
+			 * layers.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI support nodes into the Tx tree, including the
+ * VSI node, its parent and the intermediate nodes in the layers below
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add VSI support nodes to the TC subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 prev_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+	u8 i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* num queues are not changed */
+	if (prev_numqs == new_numqs)
+		return status;
+
+	/* calculate number of nodes based on prev/new number of qs */
+	if (prev_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, prev_numqs, prev_num_nodes);
+
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+
+	if (prev_numqs > new_numqs) {
+		for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+			new_num_nodes[i] = prev_num_nodes[i] - new_num_nodes[i];
+
+		ice_sched_rm_vsi_child_nodes(pi, vsi_node, new_num_nodes,
+					     owner);
+	} else {
+		for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+			new_num_nodes[i] -= prev_num_nodes[i];
+
+		status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+						       new_num_nodes, owner);
+		if (status)
+			return status;
+	}
+
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and the VSI is in a suspended state then resume the VSI. If TC is
+ * disabled then suspend the VSI if it is not suspended already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if tc is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queues whenever VSI gets added first time
+		 * into the scheduler tree (boot or after reset). We need to
+		 * recreate the child nodes all the time in these cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove agg related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma children nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i, j = 0;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..9a8a215
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12-bit register that is configured while creating the RL
+ * profile(s). MSB is a granularity bit and tells the granularity type
+ * 0 - LSB bits are in bytes granularity
+ * 1 - LSB bits are in 1K bytes granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
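+
+/* Illustrative sketch, not part of the original patch: encoding a burst
+ * size in bytes into the 12-bit layout described above. The helper name
+ * ice_encode_burst_size is hypothetical and assumes bytes does not
+ * exceed ICE_MAX_BURST_SIZE_ALLOWED.
+ */
+static inline u16 ice_encode_burst_size(u32 bytes)
+{
+	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+		return ICE_BYTE_GRANULARITY | (u16)bytes;
+	/* 1K-byte granularity: MSB set, LSB bits carry bytes / 1024 */
+	return ICE_KBYTE_GRANULARITY | (u16)(bytes / 1024);
+}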
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* FW AQ command calls */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+#endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/base/ice_sriov.c b/drivers/net/ice/base/ice_sriov.c
new file mode 100644
index 0000000..0ee7496
--- /dev/null
+++ b/drivers/net/ice/base/ice_sriov.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_adminq_cmd.h"
+#include "ice_sriov.h"
+
+/**
+ * ice_aq_send_msg_to_vf
+ * @hw: pointer to the hardware structure
+ * @vfid: VF ID to send msg
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cd: pointer to command details
+ *
+ * Send a message to the VF driver (0x0802) using the mailbox
+ * queue; the message is sent asynchronously via the
+ * ice_sq_send_cmd() function
+ */
+enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+		      u8 *msg, u16 msglen, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_pf_vf_msg *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_mbx_opc_send_msg_to_vf);
+
+	cmd = &desc.params.virt;
+	cmd->id = CPU_TO_LE32(vfid);
+
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+
+	if (msglen)
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_sq_send_cmd(hw, &hw->mailboxq, &desc, msg, msglen, cd);
+}
+
+
+/**
+ * ice_conv_link_speed_to_virtchnl
+ * @adv_link_support: determines the format of the returned link speed
+ * @link_speed: variable containing the link_speed to be converted
+ *
+ * Convert link speed supported by hw to link speed supported by virtchnl.
+ * If adv_link_support is true, then return link speed in Mbps. Else return
+ * link speed as a VIRTCHNL_LINK_SPEED_* cast to a u32. Note that the caller
+ * needs to cast back to an enum virtchnl_link_speed in the case where
+ * adv_link_support is false, but when adv_link_support is true the caller can
+ * expect the speed in Mbps.
+ */
+u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed)
+{
+	u32 speed;
+
+	if (adv_link_support)
+		switch (link_speed) {
+		case ICE_AQ_LINK_SPEED_10MB:
+			speed = ICE_LINK_SPEED_10MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_100MB:
+			speed = ICE_LINK_SPEED_100MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_1000MB:
+			speed = ICE_LINK_SPEED_1000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_2500MB:
+			speed = ICE_LINK_SPEED_2500MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_5GB:
+			speed = ICE_LINK_SPEED_5000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_10GB:
+			speed = ICE_LINK_SPEED_10000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_20GB:
+			speed = ICE_LINK_SPEED_20000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_25GB:
+			speed = ICE_LINK_SPEED_25000MBPS;
+			break;
+		case ICE_AQ_LINK_SPEED_40GB:
+			speed = ICE_LINK_SPEED_40000MBPS;
+			break;
+		default:
+			speed = ICE_LINK_SPEED_UNKNOWN;
+			break;
+		}
+	else
+		/* Virtchnl speeds are not defined for every speed supported in
+		 * the hardware. To maintain compatibility with older AVF
+		 * drivers; while reporting the speed, the new speed values
+		 * are resolved to the closest known virtchnl speeds
+		 */
+		switch (link_speed) {
+		case ICE_AQ_LINK_SPEED_10MB:
+		case ICE_AQ_LINK_SPEED_100MB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_100MB;
+			break;
+		case ICE_AQ_LINK_SPEED_1000MB:
+		case ICE_AQ_LINK_SPEED_2500MB:
+		case ICE_AQ_LINK_SPEED_5GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_1GB;
+			break;
+		case ICE_AQ_LINK_SPEED_10GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_10GB;
+			break;
+		case ICE_AQ_LINK_SPEED_20GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_20GB;
+			break;
+		case ICE_AQ_LINK_SPEED_25GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_25GB;
+			break;
+		case ICE_AQ_LINK_SPEED_40GB:
+			speed = (u32)VIRTCHNL_LINK_SPEED_40GB;
+			break;
+		default:
+			speed = (u32)VIRTCHNL_LINK_SPEED_UNKNOWN;
+			break;
+		}
+
+	return speed;
+}
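+
+/* Usage sketch (illustration only): when adv_link_support is false the
+ * caller casts the return value back, e.g.
+ *
+ *	enum virtchnl_link_speed vc_speed = (enum virtchnl_link_speed)
+ *		ice_conv_link_speed_to_virtchnl(false, link_speed);
+ */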
diff --git a/drivers/net/ice/base/ice_sriov.h b/drivers/net/ice/base/ice_sriov.h
new file mode 100644
index 0000000..e1734d6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sriov.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SRIOV_H_
+#define _ICE_SRIOV_H_
+
+#include "ice_common.h"
+
+/* #ifdef CONFIG_PCI_IOV */
+enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+		      u8 *msg, u16 msglen, struct ice_sq_cd *cd);
+
+u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
+/* #else CONFIG_PCI_IOV */
+static inline enum ice_status
+ice_aq_send_msg_to_vf(struct ice_hw __always_unused *hw,
+		      u16 __always_unused vfid, u32 __always_unused v_opcode,
+		      u32 __always_unused v_retval, u8 __always_unused *msg,
+		      u16 __always_unused msglen,
+		      struct ice_sq_cd __always_unused *cd)
+{
+	return ICE_SUCCESS;
+}
+
+static inline u32
+ice_conv_link_speed_to_virtchnl(bool __always_unused adv_link_support,
+				u16 __always_unused link_speed)
+{
+	return 0;
+}
+
+/* #endif CONFIG_PCI_IOV */
+#endif /* _ICE_SRIOV_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
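+/* The grouping of codes into ranges above makes coarse classification
+ * cheap for driver logging. A hypothetical helper sketch, not part of
+ * this patch:
+ *
+ *	static const char *ice_status_class(enum ice_status err)
+ *	{
+ *		if (err == ICE_SUCCESS)
+ *			return "success";
+ *		if (err <= -100)
+ *			return "admin queue error";
+ *		if (err <= -50)
+ *			return "NVM error";
+ *		return "generic error";
+ *	}
+ */
+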
+#endif /* _ICE_STATUS_H_ */
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..c768733
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2415 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * A word on the hardcoded values:
+ * byte 0 = 0x2: to identify it as a locally administered DA MAC
+ * byte 6 = 0x2: to identify it as a locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In case of a VLAN filter, the first two bytes define the ether type
+ *	(0x8100) and the remaining two bytes are a placeholder for programming
+ *	a given VLAN id.
+ *	In case of an Ether type filter, it is treated as a header without a
+ *	VLAN tag and bytes 12 and 13 are used to program a given Ether type
+ *	instead.
+ */
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
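+
+/* All four size macros follow the same pattern: take the size of the
+ * descriptor, drop the declared placeholder for the flexible member and
+ * add back the real payload. Worked examples, assuming the layouts in
+ * ice_adminq_cmd.h that the macros reference:
+ *
+ *	ICE_SW_RULE_RX_TX_ETH_HDR_SIZE drops the one-byte hdr placeholder
+ *	counted in ice_sw_rule_lkup_rx_tx (the "- 1") and adds back the
+ *	DUMMY_ETH_HDR_LEN (16) bytes of real header.
+ *
+ *	ICE_SW_RULE_VSI_LIST_SIZE(2) drops the single u16 vsi placeholder
+ *	and adds two real VSI entries.
+ */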
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe book keeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller of this function first calls this function with *req_desc set
+ * to 0. If the response from f/w has *req_desc set to 0, all the switch
+ * configuration information has been returned; if non-zero (meaning not all
+ * the information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter reflecting the number of elements
+ * in the response buffer. The caller should use *num_elems while parsing
+ * the response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
+
+
+
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+	cmd->vf_id = vsi_ctx->vf_num;
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
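+
+/* Usage sketch (illustrative only): the three accessors above are meant
+ * to be used together - validate the handle first, then dereference:
+ *
+ *	if (ice_is_vsi_valid(hw, vsi_handle)) {
+ *		struct ice_vsi_ctx *ctx = ice_get_vsi_ctx(hw, vsi_handle);
+ *		u16 hw_num = ice_get_hw_vsi_num(hw, vsi_handle);
+ *
+ *		(ctx is non-NULL and hw_num == ctx->vsi_num here)
+ *	}
+ */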
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware also add it into the VSI handle list.
+ * If this function gets called after reset for existing VSIs then update
+ * with the new HW VSI number in the corresponding VSI handle list entry.
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
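+
+/* Usage sketch for the add path (illustrative only; the field values are
+ * examples and use_hw_vsi_num() is a stand-in for caller code):
+ *
+ *	struct ice_vsi_ctx ctx = { 0 };
+ *
+ *	ctx.alloc_from_pool = true;        (let FW pick the VSI number)
+ *	(fill ctx.info with the desired VSI properties)
+ *	if (ice_add_vsi(hw, vsi_handle, &ctx, NULL) == ICE_SUCCESS)
+ *		use_hw_vsi_num(ctx.vsi_num);
+ *
+ * Note that ice_add_vsi() keeps its own copy of the context, so a
+ * stack-allocated ctx is fine here.
+ */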
+
+/**
+ * ice_free_vsi - free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc.
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		fi->lb_en = true;
+		/* Do not set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. The lookup is MAC with a unicast addr, OR
+		 *    the lookup is MAC_VLAN with a unicast addr
+		 *
+		 * In all other cases, the LAN enable has to be set to true.
+		 */
+		if (!(hw->evb_veb &&
+		      ((fi->lkup_type == ICE_SW_LKUP_MAC &&
+			IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+		       (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			IS_UNICAST_ETHER_ADDR(fi->l_data.mac_vlan.mac_addr)))))
+			fi->lan_en = true;
+	}
+}
+
+/**
+ * ice_ilog2 - Calculates integer log base 2 of a number
+ * @n: number on which to perform operation
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
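+
+/* ice_ilog2() returns the index of the highest set bit, i.e.
+ * floor(log2(n)) for n > 0. Worked values:
+ *
+ *	ice_ilog2(8)  == 3	(q_rgn 3 encodes a region of 2^3 queues)
+ *	ice_ilog2(12) == 3	(non-power-of-two sizes round down)
+ *	ice_ilog2(0)  == -1	(hence the qgrp_size > 0 guard below)
+ */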
+
+
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on f_info
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
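+
+/* Worked example (illustrative): for a VLAN filter on VLAN id 100
+ * forwarding to a VSI, the 16-byte dummy header ends up patched as:
+ *
+ *	bytes 0-5   : DA placeholder (unused for a pure VLAN lookup)
+ *	bytes 12-13 : 0x8100 ether type, already in dummy_eth_header
+ *	bytes 14-15 : CPU_TO_BE16(100) written at ICE_ETH_VLAN_TCI_OFFSET
+ */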
+
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold software marker and update the switch rule
+ * entry pointed by m_ent with newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule id of the previously created rule with single
+	 * act. Once the update happens, hardware will treat this as large
+	 * action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list id to VSI mapping
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The book keeping entries will get removed when the base driver
+	 * calls the remove filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do book keeping associated with adding filter
+ * information. The algorithm to do the book keeping is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list id
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry was a large action then the large action
+		 * needs to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists need to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list id found containing vsi_handle
+ *
+ * Helper function to search a VSI list with a single entry containing the
+ * given VSI handle element. This can be extended further to search VSI lists
+ * with more than 1 vsi_count. Returns pointer to the VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list should be emptied before this function is called to remove the
+ * VSI list.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to be removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else {
+		if (list_elem->vsi_list_info->ref_cnt > 1)
+			list_elem->vsi_list_info->ref_cnt--;
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		ice_free(hw, s_rule);
+
+		/* Remove the book keeping entry from the list */
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't
+ * check for duplicates in this case; removing duplicates from a given
+ * list should be taken care of by the caller of this function.
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is vsi num */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Call AQ switch rule in AQ_MAX chunk */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill up rule id based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The book keeping entries will get removed when the
+			 * base driver calls the remove filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
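+
+/* Caller sketch for the list-based API above (illustrative only; mac,
+ * vsi_handle and handle_error() stand in for caller state and code):
+ *
+ *	struct ice_fltr_list_entry entry = { 0 };
+ *	struct LIST_HEAD_TYPE mac_list;
+ *
+ *	INIT_LIST_HEAD(&mac_list);
+ *	entry.fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ *	entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	entry.fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	entry.fltr_info.vsi_handle = vsi_handle;
+ *	ice_memcpy(entry.fltr_info.l_data.mac.mac_addr, mac, ETH_ALEN,
+ *		   ICE_NONDMA_TO_NONDMA);
+ *	LIST_ADD(&entry.list_entry, &mac_list);
+ *	if (ice_add_mac(hw, &mac_list) != ICE_SUCCESS)
+ *		handle_error();
+ *
+ * For multicast entries, per-entry results are also written back to
+ * entry.status by the non-bulk path above.
+ */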
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing the VSI we
+			 * want to add. If found, use the same vsi_list_id for
+			 * this new VLAN rule or else create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI ID only if
+		 * it is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* The VLAN rule exists, but the VSI list it uses is referenced
+		 * by more than one VLAN rule. Create a new VSI list that holds
+		 * both the previous VSI and the new VSI, and update the
+		 * existing VLAN rule to point to the new VSI list ID.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a VSI count of one. The condition below should never
+		 * be hit.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* Before overriding the VSI list map info, decrement the
+		 * ref_cnt of the previous VSI list.
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
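
A minimal caller-side sketch for this API (the VLAN ID and vsi_handle below
are placeholders; the list helpers are the same osdep macros used throughout
this file):

    struct ice_fltr_list_entry entry = { 0 };
    struct LIST_HEAD_TYPE v_list;

    INIT_LIST_HEAD(&v_list);
    entry.fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
    entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
    entry.fltr_info.src_id = ICE_SRC_ID_VSI;
    entry.fltr_info.vsi_handle = vsi_handle;    /* caller's valid handle */
    entry.fltr_info.l_data.vlan.vlan_id = 100;  /* hypothetical VID */
    LIST_ADD(&entry.list_entry, &v_list);

    if (ice_add_vlan(hw, &v_list) != ICE_SUCCESS)
        /* entry.status carries the per-entry result */;
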
+
+
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the SWID).
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = hw->port_info->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				hw->port_info->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				hw->port_info->dflt_tx_vsi_rule_id;
+	}
+
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			hw->port_info->dflt_tx_vsi_num = hw_vsi_id;
+			hw->port_info->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			hw->port_info->dflt_rx_vsi_num = hw_vsi_id;
+			hw->port_info->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			hw->port_info->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			hw->port_info->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			hw->port_info->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			hw->port_info->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
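
For example, to steer all otherwise-unmatched RX traffic on the port to one
VSI and later undo it (sketch, error handling elided):

    /* make vsi_handle the default RX VSI for this switch */
    status = ice_cfg_dflt_vsi(hw, vsi_handle, true, ICE_FLTR_RX);

    /* ... later, remove the default-VSI rule again ... */
    status = ice_cfg_dflt_vsi(hw, vsi_handle, false, ICE_FLTR_RX);
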
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. The caller should be aware that this call will only work if
+ * all the entries passed into m_list were added previously. It will not
+ * attempt a partial remove of the entries that were found.
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
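
Since the loop stops at the first failure, a caller that needs to know which
entry failed can walk the list and inspect the per-entry status fields
(sketch; entries after the failing one were never processed, so their status
is whatever the caller initialized it to):

    if (ice_remove_mac(hw, &m_list) != ICE_SUCCESS) {
        struct ice_fltr_list_entry *itr;

        LIST_FOR_EACH_ENTRY(itr, &m_list, ice_fltr_list_entry, list_entry)
            if (itr->status != ICE_SUCCESS)
                break;  /* itr is the first failed entry */
    }
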
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of fltr_info.fltr_act and
+ * fltr_info.fwd_id fields. These are set such that later logic can
+ * extract which VSI to remove the filter from, and pass on that information.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with the list.
+ */
+#if defined(SRIOV_SUPPORT) && !defined(NO_VF_PROMISC_SUPPORT)
+enum ice_status
+#else
+static enum ice_status
+#endif
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure the VSI ID is valid and within bounds */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Remove filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
+
+
+
+
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ * @recp_id: Recipe id for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filters of recipe recp_id for a VSI represented via vsi_handle.
+ * A valid VSI handle must be passed.
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the source in case it is a VSI number */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the source in case it is a VSI number */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Update the default recipe lines and ones that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
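
In a reset-recovery path the expected order is roughly: restore the VSI
contexts, replay the bookkeeping lists per VSI, then drop the replay copies
once every VSI is done (sketch under those assumptions):

    /* after CORER/GLOBR recovery, for each active VSI: */
    status = ice_replay_vsi_all_fltr(hw, vsi_handle);
    if (status != ICE_SUCCESS)
        /* handle a partial replay */;

    /* once all VSIs have been replayed: */
    ice_rm_all_sw_replay_rule_info(hw);
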
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_SW_LKUP_LAST; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..1c55c63
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,320 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	u8 vf_num;
+	struct ice_lock rss_locks;
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or set only the data associated with the lookup
+		   * match; everything else should be zero
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
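
Given the l_data note above, a correct setup zeroes the whole structure first
and then fills in only the fields for the chosen lookup. A hedged sketch for
a unicast MAC forward rule (addr and vsi_handle are placeholders):

    struct ice_fltr_info fi;

    ice_memset(&fi, 0, sizeof(fi), ICE_NONDMA_MEM);  /* zeroes l_data too */
    fi.lkup_type = ICE_SW_LKUP_MAC;
    fi.fltr_act = ICE_FWD_TO_VSI;
    fi.src_id = ICE_SRC_ID_VSI;
    fi.vsi_handle = vsi_handle;
    ice_memcpy(fi.l_data.mac.mac_addr, addr, ETH_ALEN,
               ICE_NONDMA_TO_NONDMA);
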
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another, bigger recipe then this is the
+	 * chain index corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, this is the count
+	 * of those recipes; their IDs are kept in the bitmap below
+	 */
+	u8 n_grp_count;
+
+	/* Bitmap specifying the recipe IDs associated with this group */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	/* Lock to protect filter rule structure */
+	struct ice_lock filt_rule_lock;
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains MAC or VLAN membership
+ * to HW list mapping, since multiple VSIs can subscribe to the same MAC or
+ * VLAN. As an optimization the VSI list should be created only when a
+ * second VSI becomes a subscriber to the same MAC address. VSI lists are always
+ * used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+#if defined(SRIOV_SUPPORT) && !defined(NO_VF_PROMISC_SUPPORT)
+enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head);
+#endif
+
+
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_hw *hw, u16 vsi_handle, bool set, u8 direction);
+
+
+
+
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..057bc85
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,789 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) div64_long((n), (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
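
Note that the +b/2 term makes this round-to-nearest rather than a ceiling:
round_up_64bit(10, 4) computes (10 + 2) / 4 = 3, while round_up_64bit(9, 4)
computes (9 + 2) / 4 = 2.
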
+
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
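
Drivers pick log categories by OR-ing these bits into hw->debug_mask; the
switch code above, for instance, logs under ICE_DBG_SW. A usage sketch:

    hw->debug_mask = ICE_DBG_INIT | ICE_DBG_SW | ICE_DBG_AQ_MSG;
    /* or enable everything: */
    hw->debug_mask = ICE_DBG_ALL;
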
+
+
+
+
+
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_VF,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+	ICE_VSI_VF = 1,
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bits definition */
+	u64 phy_type_low;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of
+	 * ice_aqc_get_phy_caps structure
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+	ICE_VF_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* Virtualization support */
+	u8 sr_iov_1_1;			/* SR-IOV enabled */
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_allocd_vfs;		/* Number of allocated VFs */
+	u32 vf_base_id;			/* Logical ID of the first VF */
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vfs_exposed;		/* Total number of VFs exposed */
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* device is embedded rather than a card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the RESET_TYPE field of the GLGEN_RSTAT register.
+ * ICE_RESET_PFR does not match any RESET_TYPE field in the GLGEN_RSTAT
+ * register because its reset source is different from the other types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port to queue branches w.r.t topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM/VSI nodes connect to the
+ * driver-defined policy for the default aggregator
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
+
+
+/* The aggregator type determines if identifier is for a VSI group,
+ * aggregator group, aggregator of queues, or queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range: 1-8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range: 1-9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type leaf). A leaf node is created under
+ *  (a) as a child node where queues get added via the add Tx/Rx queue admin
+ *  commands; the TEID of (a) is needed to add queues.
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: Above asterisk tree covers only basic terminology and scenario.
+ *  Refer to the documentation for more info.
+ */
+
+/* Data structure for saving BW information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to configure */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* Cumulative set of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregators */
+
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the ITR/INTRL granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding a filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ND_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
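
A verification sketch of that rule (sr_read_word() is a hypothetical Shadow
RAM word reader; the real driver goes through the NVM access routines):

    u16 sum = 0, i;

    for (i = 0; i < hw->nvm.sr_words; i++)  /* includes the checksum word */
        sum += sr_read_word(hw, i);
    if (sum != ICE_SR_SW_CHECKSUM_BASE)     /* must total 0xBABA */
        /* checksum mismatch */;
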
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/* Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL
+ * register. This is needed to determine the BAR0 space for the VFs.
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
diff --git a/drivers/net/ice/base/virtchnl.h b/drivers/net/ice/base/virtchnl.h
new file mode 100644
index 0000000..90192f5
--- /dev/null
+++ b/drivers/net/ice/base/virtchnl.h
@@ -0,0 +1,787 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI. Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_STATUS_ERR_PARAM			= -5,
+	VIRTCHNL_STATUS_ERR_NO_MEMORY			= -18,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR		= -53,
+	VIRTCHNL_STATUS_ERR_NOT_SUPPORTED		= -64,
+};
+
+/* Backward compatibility */
+#define VIRTCHNL_ERR_PARAM VIRTCHNL_STATUS_ERR_PARAM
+#define VIRTCHNL_STATUS_NOT_SUPPORTED VIRTCHNL_STATUS_ERR_NOT_SUPPORTED
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+	/* opcode 19 is reserved */
+	/* opcodes 20, 21, and 22 are reserved */
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+	VIRTCHNL_OP_ENABLE_CHANNELS = 30,
+	VIRTCHNL_OP_DISABLE_CHANNELS = 31,
+	VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
+	VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+
+};
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide-by-zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{ virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL_CHECK_UNION_LEN(n, X) enum virtchnl_static_asset_enum_##X \
+	{ virtchnl_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
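
For instance, the VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg) invocation
further below expands to:

    enum virtchnl_static_assert_enum_virtchnl_msg {
        virtchnl_static_assert_virtchnl_msg =
            (20) / ((sizeof(struct virtchnl_msg) == (20)) ? 1 : 0)
    };

If the struct is not exactly 20 bytes, the ternary yields 0 and the constant
expression divides by zero, which fails the build.
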
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures. */
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_ver) (((_ver)->major == 1) && ((_ver)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
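+
+/* A minimal usage sketch (editorial addition, not in the original patch;
+ * pf_ver and caps are illustrative local variables): after
+ * VIRTCHNL_OP_VERSION completes, a VF can gate its behavior on the version
+ * the PF reported, e.g.:
+ *
+ *	struct virtchnl_version_info pf_ver;  // filled from the PF reply
+ *	...
+ *	if (pf_ver.major != VIRTCHNL_VERSION_MAJOR)
+ *		return -1;	// major mismatch: VF cannot operate
+ *	if (VF_IS_V10(&pf_ver))
+ *		caps = 0;	// a 1.0 PF offers no capability exchange
+ */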
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[ETH_ALEN];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0x00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0x00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0x00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0x00400000
+#define VIRTCHNL_VF_OFFLOAD_ADQ			0x00800000
+/* Define below the capability flags that are not offloads */
+#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED		0x00000080
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
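+
+/* Editorial example (not in the original patch): to steer RX queue 0 and
+ * TX queue 0 of VSI 3 to interrupt vector 1, a VF could fill one map as:
+ *
+ *	struct virtchnl_vector_map vm = {
+ *		.vsi_id = 3, .vector_id = 1,
+ *		.rxq_map = 0x1, .txq_map = 0x1,
+ *	};
+ */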
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these message to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
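+
+/* Editorial example (not in the original patch): to enable queue pairs 0
+ * and 1 of VSI 3, a VF would set both bitmaps to 0x3:
+ *
+ *	struct virtchnl_queue_select qs = {
+ *		.vsi_id = 3,
+ *		.rx_queues = 0x3,	// bitmap: queues 0 and 1
+ *		.tx_queues = 0x3,
+ *	};
+ */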
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[ETH_ALEN];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes */
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
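+
+/* Editorial note (not in the original patch): the one-byte key[]/lut[]
+ * arrays above are variable-length tails, so a message carrying an N-byte
+ * key occupies sizeof(struct virtchnl_rss_key) + N - 1 bytes; this is
+ * exactly the arithmetic virtchnl_vc_validate_vf_msg() applies below.
+ */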
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_ENABLE_CHANNELS
+ * VIRTCHNL_OP_DISABLE_CHANNELS
+ * VF sends these messages to enable or disable channels based on
+ * the user specified queue count and queue offset for each traffic class.
+ * This struct encompasses all the information that the PF needs from
+ * VF to create a channel.
+ */
+struct virtchnl_channel_info {
+	u16 count; /* number of queues in a channel */
+	u16 offset; /* queues in a channel start from 'offset' */
+	u32 pad;
+	u64 max_tx_rate;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_channel_info);
+
+struct virtchnl_tc_info {
+	u32	num_tc;
+	u32	pad;
+	struct	virtchnl_channel_info list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_tc_info);
+
+/* VIRTCHNL_ADD_CLOUD_FILTER
+ * VIRTCHNL_DEL_CLOUD_FILTER
+ * VF sends these messages to add or delete a cloud filter based on the
+ * user specified match and action filters. These structures encompass
+ * all the information that the PF needs from the VF to add/delete a
+ * cloud filter.
+ */
+
+struct virtchnl_l4_spec {
+	u8	src_mac[ETH_ALEN];
+	u8	dst_mac[ETH_ALEN];
+	__be16	vlan_id;
+	__be16	pad; /* reserved for future use */
+	__be32	src_ip[4];
+	__be32	dst_ip[4];
+	__be16	src_port;
+	__be16	dst_port;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(52, virtchnl_l4_spec);
+
+union virtchnl_flow_spec {
+	struct	virtchnl_l4_spec tcp_spec;
+	u8	buffer[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_UNION_LEN(128, virtchnl_flow_spec);
+
+enum virtchnl_action {
+	/* action types */
+	VIRTCHNL_ACTION_DROP = 0,
+	VIRTCHNL_ACTION_TC_REDIRECT,
+};
+
+enum virtchnl_flow_type {
+	/* flow types */
+	VIRTCHNL_TCP_V4_FLOW = 0,
+	VIRTCHNL_TCP_V6_FLOW,
+};
+
+struct virtchnl_filter {
+	union	virtchnl_flow_spec data;
+	union	virtchnl_flow_spec mask;
+	enum	virtchnl_flow_type flow_type;
+	enum	virtchnl_action action;
+	u32	action_meta;
+	u8	field_flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(272, virtchnl_filter);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		/* If the PF driver does not support the new speed reporting
+		 * capabilities then use link_event else use link_event_adv to
+		 * get the speed and link information. The ability to understand
+		 * new speeds is indicated by setting the capability flag
+		 * VIRTCHNL_VF_CAP_ADV_LINK_SPEED in vf_cap_flags parameter
+		 * in virtchnl_vf_resource struct and can be used to determine
+		 * which link event struct to use below.
+		 */
+		struct {
+			enum virtchnl_link_speed link_speed;
+			u8 link_status;
+		} link_event;
+		struct {
+			/* link_speed provided in Mbps */
+			u32 link_speed;
+			u8 link_status;
+		} link_event_adv;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
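+
+/* A hedged sketch of the VF-side poll (editorial, not in the original
+ * patch; rd32() and the 0x3 state mask are assumed helpers/values that
+ * follow the comment above):
+ *
+ *	u32 rstat = rd32(hw, VFGEN_RSTAT) & 0x3; // 0xDEADBEEF & 0x3 == 3
+ *	if (rstat == VIRTCHNL_VFR_COMPLETED)
+ *		... reinitialize the admin queue ...
+ */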
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	case VIRTCHNL_OP_ENABLE_CHANNELS:
+		valid_len = sizeof(struct virtchnl_tc_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_tc_info *vti =
+				(struct virtchnl_tc_info *)msg;
+			valid_len += (vti->num_tc - 1) *
+				     sizeof(struct virtchnl_channel_info);
+			if (vti->num_tc == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_DISABLE_CHANNELS:
+		break;
+	case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+	case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+		valid_len = sizeof(struct virtchnl_filter);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_STATUS_ERR_PARAM;
+	}
+	/* a few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
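+
+/* Editorial usage sketch (not part of the original patch; vf->ver is a
+ * hypothetical per-VF state field): a PF message handler would typically
+ * validate before dispatching:
+ *
+ *	int err = virtchnl_vc_validate_vf_msg(&vf->ver, v_opcode,
+ *					      msg, msglen);
+ *	if (err)
+ *		... reply with err and drop the message ...
+ */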
+#endif /* _VIRTCHNL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 01/20] net/ice: add base code Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  9:07     ` Varghese, Vipin
  2018-12-04  4:40     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops Wenzhuo Lu
                     ` (17 subsequent siblings)
  19 siblings, 2 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   9 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  75 ++++
 drivers/net/ice/ice_ethdev.c            | 643 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 348 +++++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/ice_rxtx.h              | 117 ++++++
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 9 files changed, 1243 insertions(+)
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/config/common_base b/config/common_base
index d12ae98..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,15 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
+
+#
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..5af66d9
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER  = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..6ea9bf0
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,643 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static void ice_dev_close(struct rte_eth_dev *dev);
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 __rte_unused void *opaque)
+{
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	errno = 0; /* clear errno so the post-strtoul check is reliable */
+	num = strtoul(qp_value, &end, 10);
+
+	if (!num || (*end == '-') || errno) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int ret;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	/* Parse once; a negative return means an invalid value was given */
+	ret = rte_kvargs_process(kvlist, queue_num_key,
+				 ice_check_qp_num, NULL);
+	rte_kvargs_free(kvlist);
+	if (ret < 0)
+		return 0;
+
+	return ret;
+}
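+
+/* Editorial example (not in the original patch): with the parsing above a
+ * user can cap the queue pairs from EAL devargs, e.g.
+ *	testpmd -w 0000:18:00.0,max_queue_pair_num=4 ...
+ * (the PCI address is illustrative).
+ */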
+
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc("ice", sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* queue heap initialize */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize the first element to cover the whole range */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* An exact fit, use it directly */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry can satisfy the request, return */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly the requested number of queues;
+	 * remove it from the free list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested; create a new
+		 * entry for the alloc list and subtract the allocated
+		 * base/length from the free-list entry.
+		 */
+		entry = rte_zmalloc("res_pool", sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
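+
+/* Editorial sketch of the pool's intended use (not in the original patch;
+ * the numbers are illustrative):
+ *
+ *	struct ice_res_pool_info pool;
+ *
+ *	ice_res_pool_init(&pool, 1, 31);	// indexes 1..31 available
+ *	base = ice_res_pool_alloc(&pool, 4);	// best fit, returns 1
+ *	...
+ *	ice_res_pool_destroy(&pool);
+ */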
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Only TC0 is supported for now; multi-TC support will be added
+	 * later. Configure the TC and queue mapping parameters and, for
+	 * each enabled TC, allocate qpnum_per_tc queues to its traffic.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust to a power-of-two queue count that can actually be applied */
+	vsi->nb_qps = 0x1 << bsf;
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+	info->ingress_table  = rte_cpu_to_le_32(0x00FAC688);
+	info->egress_table   = rte_cpu_to_le_32(0x00FAC688);
+	info->outer_up_table = rte_cpu_to_le_32(0x00FAC688);
+	return 0;
+}
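+
+/* Editorial note (not in the original patch): with nb_qps = 8, for
+ * example, rte_bsf32() yields bsf = 3, so tc_mapping[0] encodes queue
+ * offset 0 with 2^3 = 8 queues for TC0. For a non-power-of-two count,
+ * rte_bsf32() selects the lowest set bit (e.g. 10 would become 2).
+ */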
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr
+		((struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc("ice", sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/*  Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int qp_num = ice_config_max_queue_pair_num(dev->device->devargs);
+
+	/* Use the devargs value if one was given, else the HW capability */
+	if (qp_num > 0)
+		pf->lan_nb_qp_max = (uint16_t)qp_num;
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc("ice_vsi", sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of the VSI add/update
+	 * command. Assume vsi->base_queue is 0 for now; SRIOV and VMDQ
+	 * cases are not considered at this stage, only the main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* Store the VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* Only TC0 is used at the beginning.
+	 * What we need here is the maximum number of TX queues;
+	 * currently vsi->nb_qps holds that value.
+	 * Update this if that ever changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	rte_eth_copy_pci_info(dev, pci_dev);
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	ICE_PROC_SECONDARY_CHECK_RET_0;
+
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_release_vsi(pf->main_vsi);
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log);
+static void
+ice_init_log(void)
+{
+	ice_logtype_init = rte_log_register("pmd.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ICE_PROC_SECONDARY_CHECK_NO_ERR;
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..bc2d6f2
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,348 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12-bit number.
+ * The VFTA array is actually a 4096-bit array of 128 32-bit elements.
+ * 2^5 = 32; the lower 5 bits select the bit within a 32-bit element,
+ * while the upper 7 bits select the VFTA array index.
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
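+
+/* Editorial example (not in the original patch): for vlan_id = 100,
+ * ICE_VFTA_IDX(100) = 100 >> 5 = 3 and ICE_VFTA_BIT(100) = 1 << (100 &
+ * 0x1F) = 1 << 4, i.e. bit 4 of element 3 of the VFTA array.
+ */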
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When the driver is loaded, only a default main VSI exists. When a
+	 * new VSI needs to be added, the HW must know how the VSIs are
+	 * organized. Moreover, a VSI is an element that cannot switch packets
+	 * by itself, so a new VEB component must be added to perform
+	 * switching. Therefore a new VSI must specify its uplink VSI (parent
+	 * VSI) before it is created. The uplink VSI checks whether it already
+	 * has a VEB to switch packets; if not, it tries to create one. The
+	 * uplink VSI then moves the new VSI into its sib_vsi_list to manage
+	 * all the downlink VSIs.
+	 *  sib_vsi_list: the list of VSIs that share the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It's NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+	uint16_t nb_msix;   /* The max number of msix vector */
+	uint8_t enabled_tc; /* The traffic class enabled */
+	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+	uint8_t vlan_filter_on; /* The VLAN filter enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF is associated with */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Next free software VSI index.
+	 * For simplicity the indexes are not recycled;
+	 * we assume there are more than enough of them.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* internal packet statistics, it should be excluded from the total */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* The PVID to apply; valid when 'on' is set */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)(pf)->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
+#define ICE_PROC_SECONDARY_CHECK					\
+	do {								\
+		if (rte_eal_process_type() == RTE_PROC_SECONDARY) {	\
+			PMD_DRV_LOG(ERR,				\
+				    "Control plane functions not "	\
+				    "supported by secondary process.");	\
+			return -E_RTE_SECONDARY;			\
+		}							\
+	} while (0)
+
+#define ICE_PROC_SECONDARY_CHECK_RET_0					\
+	do {								\
+		if (rte_eal_process_type() == RTE_PROC_SECONDARY) {	\
+			PMD_DRV_LOG(ERR,				\
+				    "Control plane functions not "	\
+				    "supported by secondary process.");	\
+			return 0;					\
+		}							\
+	} while (0)
+
+#define ICE_PROC_SECONDARY_CHECK_NO_ERR					\
+	do {								\
+		if (rte_eal_process_type() == RTE_PROC_SECONDARY) {	\
+			PMD_DRV_LOG(ERR,				\
+				    "Control plane functions not "	\
+				    "supported by secondary process.");	\
+			return;						\
+		}							\
+	} while (0)
+
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
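+
+/* Editorial note (not in the original patch): ice_align_floor() rounds
+ * down to the nearest power of two, e.g. ice_align_floor(10) == 8 and
+ * ice_align_floor(64) == 64.
+ */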
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of next staged packets */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 01/20] net/ice: add base code Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  4:53     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information Wenzhuo Lu
                     ` (16 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

When the device is started or stopped, its queues should
be started and stopped along with it. This patch supports
both.

The ops below are added; a minimal usage sketch follows
the list.
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
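
A minimal application-side sketch of how these ops are reached through
the generic ethdev API (assuming port 0 is an ice port, "mb_pool" is an
existing mempool, and the queue sizes are illustrative):

	struct rte_eth_conf port_conf = { 0 };

	/* triggers ice_dev_configure() */
	rte_eth_dev_configure(0, 1, 1, &port_conf);
	/* trigger ice_rx_queue_setup()/ice_tx_queue_setup() */
	rte_eth_rx_queue_setup(0, 0, 1024, rte_socket_id(), NULL, mb_pool);
	rte_eth_tx_queue_setup(0, 0, 1024, rte_socket_id(), NULL);
	/* triggers ice_dev_start(), which starts every queue */
	rte_eth_dev_start(0);
	/* ... datapath ... */
	/* trigger ice_dev_stop() and ice_dev_close() */
	rte_eth_dev_stop(0);
	rte_eth_dev_close(0);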

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/Makefile       |   3 +-
 drivers/net/ice/ice_ethdev.c   | 204 ++++++++-
 drivers/net/ice/ice_lan_rxtx.c | 943 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  20 +
 4 files changed, 1168 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 5af66d9..472f9c7 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6ea9bf0..7e3bad0 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,7 +14,11 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -24,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -628,6 +644,162 @@
 		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
 }
 
+static int
+ice_dev_configure(__rte_unused struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	/* Initialize to TRUE. If any Rx queue fails to meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset; the
+	 * same applies to the simple Tx path.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc("rss_key",
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc("rss_lut",
+					   vsi->rss_lut_size, 0);
+	if (!vsi->rss_key || !vsi->rss_lut)
+		return -ENOMEM;
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	}
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->vsi_id, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->vsi_id,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	/* program Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* program Rx queues' context in hardware */
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Failed to set phy mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* Stop any queues started before the failure; an Rx failure
+	 * falls through to stop the Tx queues as well.
+	 */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	ICE_PROC_SECONDARY_CHECK_NO_ERR;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
 static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
@@ -636,8 +808,38 @@
 
 	ICE_PROC_SECONDARY_CHECK_NO_ERR;
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
 	ice_shutdown_all_ctrlq(hw);
 }
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..dddc8b1
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,943 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
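+/* mbuf ol_flags that require checksum or TSO offload handling on Tx */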
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptors and programs this
+	 * register for flex descriptor mode. DPDK uses legacy descriptors,
+	 * so set the register back to its default value before using
+	 * legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size as the head split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u,"
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold, defined in units of 64 descriptors.
+	 * When the number of free descriptors goes below the lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
+	rx_ctx.lrxqthresh = 2;
+	/* Default to 32-byte descriptors; VLAN tag extracted to L2TAG2 (1st) */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register*/
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. QENA_STAT is expected to follow QENA_REQ
+	 * within no more than 10 us.
+	 * TODO: the wait counter may need tuning later.
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check if it is timeout */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions
+	(__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or not set up",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register*/
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add lan txq");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
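+	/* Mark every descriptor as "done" and chain the SW ring
+	 * entries into a circular list.
+	 */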
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(DEBUG, "Failed to disable Lan Tx queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("ice rx queue",
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocate a little more memory because the vectorized/bulk_alloc Rx
+	 * functions don't check boundaries each time.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket("ice rx sw ring",
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	ICE_PROC_SECONDARY_CHECK_NO_ERR;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero, the default values are used.
+	 */
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("ice tx queue",
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("ice tx sw ring",
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	ICE_PROC_SECONDARY_CHECK_NO_ERR;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
 		uint64_t outer_l3_len:16; /* outer L3 Header Length */
 	};
 };
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (2 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  4:59     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting Wenzhuo Lu
                     ` (15 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.
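
A minimal sketch (assuming "port_id" refers to an ice port) of how the
reported capabilities are queried through the generic ethdev API:

	struct rte_eth_dev_info dev_info;

	/* triggers ice_dev_info_get() */
	rte_eth_dev_info_get(port_id, &dev_info);
	printf("max rxq=%u max txq=%u reta size=%u\n",
	       dev_info.max_rx_queues, dev_info.max_tx_queues,
	       dev_info.reta_size);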

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 123 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 123 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 7e3bad0..2437159 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -843,3 +846,123 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	switch (hw->port_info->phy.link_info.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
+		break;
+	}
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = 32;
+	dev_info->default_txportconf.burst_size = 32;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = 1024;
+	dev_info->default_txportconf.ring_size = 1024;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (3 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  5:19     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 06/20] net/ice: support link update Wenzhuo Lu
                     ` (14 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.
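
A minimal sketch (assuming "port_id" refers to an ice port) of how the
supported ptypes are listed through the generic ethdev API:

	uint32_t ptypes[32];
	int i, num;

	/* triggers ice_dev_supported_ptypes_get(); returns the number of
	 * supported ptypes and fills at most RTE_DIM(ptypes) entries
	 */
	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					       ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
		printf("ptype 0x%08x\n", ptypes[i]);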

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2437159..5a78f3b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -492,6 +493,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	rte_eth_copy_pci_info(dev, pci_dev);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index dddc8b1..6d5335d 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -900,6 +900,42 @@
 	rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -941,3 +977,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet explains what each ptype value means in
+ * more detail.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
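
For context, the table above is indexed by the hardware packet type
value taken from the Rx descriptor, and ice_set_default_ptype_table()
caches one entry per possible index at init time. A minimal sketch of
an Rx-side lookup follows; the helper name and the assumption that the
caller has already masked the ptype index out of the descriptor are
hypothetical, not part of the patch:

	#include <rte_mbuf.h>

	/* Sketch: translate a HW ptype index into the mbuf packet_type.
	 * ad->ptype_tbl is filled once at init by
	 * ice_set_default_ptype_table(); ptype_idx is assumed to be the
	 * ptype field already extracted from the Rx descriptor.
	 */
	static inline void
	rx_fill_packet_type(struct ice_adapter *ad, struct rte_mbuf *mb,
			    uint16_t ptype_idx)
	{
		mb->packet_type = ad->ptype_tbl[ptype_idx];
	}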

* [dpdk-dev] [PATCH v2 06/20] net/ice: support link update
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (4 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting Wenzhuo Lu
                     ` (13 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 334 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 334 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5a78f3b..164dfd5 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -330,6 +333,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc("msg_buffer", event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by NIC for handling
+ * specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -487,6 +671,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -495,6 +680,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	rte_eth_copy_pci_info(dev, pci_dev);
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -541,6 +727,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -587,6 +782,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	ICE_PROC_SECONDARY_CHECK_RET_0;
 
@@ -599,6 +796,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -763,6 +967,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info aq command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -783,6 +990,8 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -802,6 +1011,13 @@ static int ice_init_rss(struct ice_pf *pf)
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -968,3 +1184,121 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_rxportconf.ring_size = 1024;
 	dev_info->default_txportconf.ring_size = 1024;
 }
+
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
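
For reference, applications read the link state that ice_link_update()
caches in dev->data->dev_link through the generic ethdev API. A minimal
sketch, assuming the DPDK of this era where rte_eth_link_get_nowait()
returns void:

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Sketch: non-blocking read of the link status cached by the PMD */
	static void
	print_port_link(uint16_t port_id)
	{
		struct rte_eth_link link;

		rte_eth_link_get_nowait(port_id, &link);
		printf("port %u: link %s, %u Mbps, %s-duplex\n",
		       port_id,
		       link.link_status ? "up" : "down",
		       link.link_speed,
		       link.link_duplex == ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");
	}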

* [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (5 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 06/20] net/ice: support link update Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  5:25     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 08/20] net/ice: support MAC ops Wenzhuo Lu
                     ` (12 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops mtu_set.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 164dfd5..bf290ab 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 };
 
 static void
@@ -1302,3 +1304,37 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	/* check if mtu is within the allowed range */
+	if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* mtu setting is forbidden if port is started */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
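
For reference, ice_mtu_set() derives the max frame size as
mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE and toggles
DEV_RX_OFFLOAD_JUMBO_FRAME accordingly, rejecting changes on a started
port. A minimal usage sketch; the port id and MTU value are
illustrative:

	#include <rte_ethdev.h>

	/* Sketch: the op returns -EBUSY on a started port, so stop first */
	static int
	set_jumbo_mtu(uint16_t port_id)
	{
		rte_eth_dev_stop(port_id);

		/* 9000 plus header/CRC/VLAN overhead must stay within
		 * ICE_FRAME_SIZE_MAX, or the op returns -EINVAL
		 */
		return rte_eth_dev_set_mtu(port_id, 9000);
	}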

* [dpdk-dev] [PATCH v2 08/20] net/ice: support MAC ops
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (6 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 09/20] net/ice: support VLAN ops Wenzhuo Lu
                     ` (11 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
mac_addr_set
mac_addr_add
mac_addr_remove

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 239 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 239 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bf290ab..3d3ca95 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 };
 
 static void
@@ -335,6 +345,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc("mac_filter", sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -543,6 +677,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -628,6 +765,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximum number of TX queues.
 	 * Currently vsi->nb_qps means it.
@@ -1338,3 +1490,90 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK_NO_ERR;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
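
For reference, these ops are reached through the generic ethdev MAC
API; ice_macaddr_add() programs a VSI MAC filter via
ice_add_mac_filter(). A minimal sketch, with an illustrative locally
administered address:

	#include <rte_ethdev.h>
	#include <rte_ether.h>

	/* Sketch: add a secondary unicast MAC; resolves to
	 * ice_macaddr_add() in the PMD
	 */
	static int
	add_secondary_mac(uint16_t port_id)
	{
		struct ether_addr addr = {
			.addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} };

		return rte_eth_dev_mac_addr_add(port_id, &addr, 0 /* pool */);
	}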

* [dpdk-dev] [PATCH v2 09/20] net/ice: support VLAN ops
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (7 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 08/20] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 10/20] net/ice: support RSS Wenzhuo Lu
                     ` (10 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 598 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 598 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3d3ca95..20e1620 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
 static void
@@ -469,6 +482,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc("vlan_filter", sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * Vlan 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq insertion",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -828,6 +1132,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -881,6 +1186,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -916,6 +1226,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1577,3 +1889,289 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 		return;
 	}
 }
+
+static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+		break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If PVID insertion is enabled, only tagged packets are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
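
The VLAN ops above are driven from applications through the generic ethdev
layer. A minimal sketch of that call path, assuming the 18.11-era flag names;
the wrapper name, port id and error handling are illustrative only:

#include <rte_ethdev.h>

/* Enable VLAN stripping and filtering. rte_eth_dev_set_vlan_offload()
 * works out which bits changed and invokes the PMD's vlan_offload_set
 * callback (ice_vlan_offload_set here) with the matching *_MASK bits.
 */
static int
enable_vlan_strip_and_filter(uint16_t port_id)
{
	int flags = rte_eth_dev_get_vlan_offload(port_id);

	if (flags < 0)
		return flags;

	flags |= ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, flags);
}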

* [dpdk-dev] [PATCH v2 10/20] net/ice: support RSS
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (8 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 09/20] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 11/20] net/ice: support RX queue interruption Wenzhuo Lu
                     ` (9 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
reta_update
reta_query
rss_hash_update
rss_hash_conf_get

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 246 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 246 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 20e1620..8cf4839 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2093,6 +2107,238 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->vsi_id, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->vsi_id, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("ice_rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	ret = ice_aq_set_rss_key(hw, vsi->vsi_id,
+				 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key
+		(hw, vsi->vsi_id,
+		 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	/* set hash key */
+	ret = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (ret)
+		return ret;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: default set to 0 as hf config is not supported now */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
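
How the reta_update op above is reached from an application, as a minimal
sketch; the wrapper name and the two-queue spread are illustrative, and it
assumes the device reports a RETA of at most 512 entries (the limit the PMD
enforces):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta_over_two_queues(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
						  RTE_RETA_GROUP_SIZE];
	struct rte_eth_dev_info dev_info;
	uint16_t i, reta_size;

	rte_eth_dev_info_get(port_id, &dev_info);
	reta_size = dev_info.reta_size;
	if (reta_size == 0 || reta_size > ETH_RSS_RETA_SIZE_512)
		return -EINVAL;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_RETA_GROUP_SIZE;

		/* only entries whose mask bit is set are written by the PMD */
		reta_conf[idx].mask |= 1ULL << shift;
		reta_conf[idx].reta[shift] = i % 2; /* queue 0 or queue 1 */
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}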

* [dpdk-dev] [PATCH v2 11/20] net/ice: support RX queue interruption
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (9 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 10/20] net/ice: support RSS Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 12/20] net/ice: support FW version getting Wenzhuo Lu
                     ` (8 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rx_queue_intr_enable
rx_queue_intr_disable

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 234 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 234 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8cf4839..008a4fc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -1400,6 +1406,186 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do the actual binding */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1436,6 +1622,10 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	/* enable Rx interrupts and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -1470,6 +1660,7 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1488,6 +1679,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -2338,6 +2532,46 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	ICE_PROC_SECONDARY_CHECK;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
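
For context, the receive loop these two ops enable, as a minimal sketch. It
assumes dev_conf.intr_conf.rxq was set before rte_eth_dev_start() and that
the queue's vector was registered with rte_eth_dev_rx_intr_ctl_q(); the
wrapper name is illustrative:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* One iteration of an interrupt-driven receive loop: poll once; if the
 * queue is empty, arm the interrupt, block on the per-thread epoll
 * instance, then disarm and poll again.
 */
static uint16_t
rx_burst_or_sleep(uint16_t port_id, uint16_t queue_id,
		  struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	struct rte_epoll_event event;
	uint16_t nb_rx;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, nb_pkts);
	if (nb_rx > 0)
		return nb_rx;

	/* empty queue: arm the interrupt and sleep until traffic arrives */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1 /* no timeout */);
	rte_eth_dev_rx_intr_disable(port_id, queue_id);

	return rte_eth_rx_burst(port_id, queue_id, pkts, nb_pkts);
}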

* [dpdk-dev] [PATCH v2 12/20] net/ice: support FW version getting
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (10 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 11/20] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 13/20] net/ice: support EEPROM information getting Wenzhuo Lu
                     ` (7 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the fw_version_get op.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 008a4fc..289cf99 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2573,6 +2576,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
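
The return convention above (0 on success, otherwise the buffer size the
driver needed) leads to the usual retry pattern on the application side; a
minimal sketch, with the wrapper name and initial buffer size as illustrative
choices:

#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
	char buf[32];
	int ret;

	ret = rte_eth_dev_fw_version_get(port_id, buf, sizeof(buf));
	if (ret == 0) {
		printf("FW version: %s\n", buf);
	} else if (ret > 0) {
		/* ret is the size the driver needed, '\0' included */
		char *big = malloc(ret);

		if (big != NULL &&
		    rte_eth_dev_fw_version_get(port_id, big, ret) == 0)
			printf("FW version: %s\n", big);
		free(big);
	}
}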

* [dpdk-dev] [PATCH v2 13/20] net/ice: support EEPROM information getting
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (11 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 12/20] net/ice: support FW version getting Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics Wenzhuo Lu
                     ` (6 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add the following ops:
get_eeprom_length
get_eeprom

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 45 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 289cf99..ec66c28 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 };
 
 static void
@@ -2676,3 +2681,43 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
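
A minimal application-side sketch of the two ops above; the wrapper name and
the 32-word dump length are illustrative. Since the PMD reads the NVM in
16-bit words, the offset and length stay word-aligned:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_eeprom_start(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	uint16_t words[32];
	int len, i;

	/* total NVM size in bytes, as reported by get_eeprom_length */
	len = rte_eth_dev_get_eeprom_length(port_id);
	if (len <= 0)
		return len;

	memset(&info, 0, sizeof(info));
	info.data = words;
	info.offset = 0;
	info.length = RTE_MIN((uint32_t)len, (uint32_t)sizeof(words));

	if (rte_eth_dev_get_eeprom(port_id, &info) != 0)
		return -EIO;

	for (i = 0; i < (int)(info.length / 2); i++)
		printf("word %d: 0x%04x\n", i, words[i]);

	return 0;
}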

* [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (12 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 13/20] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  5:35     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 15/20] net/ice: support queue information getting Wenzhuo Lu
                     ` (5 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add the following ops:
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 574 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 574 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ec66c28..d77e5f7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,100 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+	{"tx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+		tx_lpi_status)},
+	{"rx_low_power_idle_status", offsetof(struct ice_hw_port_stats,
+		rx_lpi_status)},
+	{"tx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+		tx_lpi_count)},
+	{"rx_low_power_idle_count", offsetof(struct ice_hw_port_stats,
+		rx_lpi_count)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2721,3 +2821,477 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
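+	/* if the 32-bit counter wrapped since the last read, add back 2^32 */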
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
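+	/* if the 40-bit counter wrapped since the last read, add back 2^40 */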
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* read the hardware registers to refresh the values in the struct */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats; the current register values become the new offsets */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
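
The xstats ops above follow the standard two-call ethdev pattern: query the
count, then fetch names and values with matching ids. A minimal sketch; the
wrapper name is illustrative:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *values;
	int n, i;

	/* the first call with NULL just returns the number of statistics */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names != NULL && values != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, values, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[values[i].id].name, values[i].value);
	}
	free(names);
	free(values);
}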

* [dpdk-dev] [PATCH v2 15/20] net/ice: support queue information getting
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (13 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX Wenzhuo Lu
                     ` (4 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  5 ++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d77e5f7..5fafcb4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.rx_queue_count               = ice_rx_queue_count,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 6d5335d..e0c5d4b 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -937,6 +937,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of every 4th Rx descriptor to avoid
+		 * checking too frequently and degrading performance.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
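
A minimal sketch of the three ops above from the application side; the
wrapper name is illustrative. Note that rx_queue_count scans DD bits in
steps of 4, so it returns an estimate:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int used;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
		return;

	/* an estimate rounded to a multiple of 4, not an exact fill level */
	used = rte_eth_rx_queue_count(port_id, queue_id);

	printf("rxq %u: %u descriptors, free thresh %u, ~%d used\n",
	       (unsigned int)queue_id, (unsigned int)qinfo.nb_desc,
	       (unsigned int)qinfo.conf.rx_free_thresh, used);
}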

* [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (14 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 15/20] net/ice: support queue information getting Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-04  5:42     ` Varghese, Vipin
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 17/20] net/ice: support advance RX/TX Wenzhuo Lu
                     ` (3 subsequent siblings)
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  17 ++
 drivers/net/ice/ice_lan_rxtx.c | 568 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h     |   8 +
 3 files changed, 591 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5fafcb4..b78a342 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1267,7 +1267,22 @@ struct ice_xstats_name_off {
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX function.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+		ice_set_rx_function(dev);
+		ice_set_tx_function(dev);
+		PMD_INIT_LOG(ERR,
+			     "Control plane functions not "
+			     "supported by secondary process.");
+		return 0;
+	}
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 	intr_handle = &pci_dev->intr_handle;
@@ -1733,6 +1748,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
 	/* enable Rx interrupt and map Rx queues to interrupt vectors */
 	if (ice_rxq_intr_setup(dev))
 		return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index e0c5d4b..5aa0ab6 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -900,8 +900,81 @@
 	rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -933,7 +1006,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1044,6 +1119,495 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of the queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%"PRIx64"\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * in case of a non-tunneled packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus one context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ_PKT) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside this range is considered malicious
+			 */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+		dev->tx_pkt_burst = ice_xmit_pkts;
+		dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 17/20] net/ice: support advance RX/TX
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (15 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 18/20] net/ice: support descriptor ops Wenzhuo Lu
                     ` (2 subsequent siblings)
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add RX functions: scattered and bulk-allocation receive.
Add TX function: simple transmit.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_lan_rxtx.c | 660 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 658 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 5aa0ab6..80ad4dd 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -973,6 +973,431 @@
 	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update Rx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When the next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -1006,7 +1431,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1329,6 +1758,20 @@
 	return 0;
 }
 
+/* Construct the Tx descriptor cmd/type/offset/bsz qword */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1547,10 +1990,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine if the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using a Scattered function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1604,8 +2250,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 18/20] net/ice: support descriptor ops
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (16 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 17/20] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-03  7:06   ` Wenzhuo Lu
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note Wenzhuo Lu
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build Wenzhuo Lu
  19 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:06 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the below ops:
rx_descriptor_done
rx_descriptor_status
tx_descriptor_status
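
These back the generic ethdev descriptor APIs. A minimal sketch of how an
application might poll them (the ids below are hypothetical, not taken from
this patch):

	#include <rte_ethdev.h>

	/* hypothetical port/queue/offset values */
	uint16_t port_id = 0, queue_id = 0, offset = 0;
	int st;

	st = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
	if (st == RTE_ETH_RX_DESC_DONE) {
		/* the descriptor has been written back; a packet is ready */
	}

	st = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
	if (st == RTE_ETH_TX_DESC_DONE) {
		/* transmission finished; the ring slot can be reused */
	}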

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 84 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  3 ++
 3 files changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index b78a342..ec1445d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,9 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_done           = ice_rx_descriptor_done,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 80ad4dd..9aae6c3 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1506,6 +1506,90 @@
 	return desc;
 }
 
+int
+ice_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq = rx_queue;
+	uint16_t desc;
+	int ret;
+
+	if (unlikely(offset >= rxq->nb_rx_desc)) {
+		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
+		return 0;
+	}
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	rxdp = &rxq->rx_ring[desc];
+
+	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		  ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+		 (1 << ICE_RX_DESC_STATUS_DD_S));
+
+	return ret;
+}
+
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to the next descriptor that has the RS bit set */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..12ad383 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,9 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_done(void *rx_queue, uint16_t offset);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (17 preceding siblings ...)
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 18/20] net/ice: support descriptor ops Wenzhuo Lu
@ 2018-12-03  7:07   ` Wenzhuo Lu
  2018-12-03  8:15     ` Varghese, Vipin
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build Wenzhuo Lu
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:07 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                            |  1 +
 doc/guides/nics/features/ice.ini       | 39 +++++++++++++++
 doc/guides/nics/ice.rst                | 87 ++++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |  4 ++
 4 files changed, 131 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cd01565 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/features/ice*.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..2be52ca
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,39 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = Y
+QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Extended stats       = Y
+FW version           = Y
+Module EEPROM dump   = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..f551c6c
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,87 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+======================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is
+  32 bytes (see the example below).
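+
+For example, with the make-based build, 16-byte RX descriptors can be
+enabled before compiling (a sketch; run from the DPDK source tree):
+
+.. code-block:: console
+
+    sed -i 's/CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=y/' config/common_base
+    make config T=x86_64-native-linuxapp-gcc
+    make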
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Sample Application Notes
+------------------------
+
+Vlan filter
+~~~~~~~~~~~
+
+The VLAN filter only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
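+
+Since the VLAN filter requires promiscuous mode to be off and ``testpmd``
+enables promiscuous mode by default, it may also be necessary to disable
+it first:
+
+.. code-block:: console
+
+    testpmd> set promisc 0 off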
+
+
+Limitations or Known issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+The ice code released in 19.02 is for evaluation only.
+
+
+Secondary Process
+~~~~~~~~~~~~~~~~~
+
+The ice PMD supports secondary processes, but it does not support changing
+settings or configuration from within a secondary process; a generic
+illustration follows.
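+
+As a generic illustration (not specific to this driver), control-path calls
+in a multi-process application can be guarded so that only the primary
+process issues them:
+
+.. code-block:: c
+
+    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);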
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86..c5a054b 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added ICE net PMD.**
+
+  Added the new ``ice`` net driver for Intel® Ethernet Network Adapters E810.
+  See the :doc:`../nics/ice` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
                     ` (18 preceding siblings ...)
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-12-03  7:07   ` Wenzhuo Lu
  2018-12-03 10:00     ` Varghese, Vipin
  19 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-03  7:07 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
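
With this patch in place, the driver builds via the standard DPDK meson
flow, e.g. (a sketch):

    meson build
    ninja -C build
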
 drivers/net/ice/base/meson.build | 30 ++++++++++++++++++++++++++++++
 drivers/net/ice/meson.build      | 15 +++++++++++++++
 drivers/net/meson.build          |  1 +
 3 files changed, 46 insertions(+)
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/meson.build

diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
new file mode 100644
index 0000000..5aafff3
--- /dev/null
+++ b/drivers/net/ice/base/meson.build
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+	'ice_controlq.c',
+	'ice_common.c',
+	'ice_sched.c',
+	'ice_switch.c',
+	'ice_nvm.c',
+]
+
+error_cflags = ['-Wno-sign-compare', '-Wno-unused-value',
+		'-Wno-format', '-Wno-error=format-security',
+		'-Wno-strict-aliasing', '-Wno-unused-but-set-variable',
+		'-Wno-unused-variable',
+]
+c_args = cflags
+if allow_experimental_apis
+	c_args += '-DALLOW_EXPERIMENTAL_API'
+endif
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ice_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
new file mode 100644
index 0000000..b921354
--- /dev/null
+++ b/drivers/net/ice/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+cflags += ['-DALLOW_EXPERIMENTAL_API']
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+	'ice_ethdev.c',
+	'ice_lan_rxtx.c'
+	)
+
+deps += ['hash']
+includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 980eec2..45da3bb 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -17,6 +17,7 @@ drivers = ['af_packet',
 	'enic',
 	'failsafe',
 	'fm10k', 'i40e',
+	'ice',
 	'ifc',
 	'ixgbe',
 	'kni',
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-12-03  8:15     ` Varghese, Vipin
  2018-12-05  6:54       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-03  8:15 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Hi,

Thanks for adding details about secondary multi-process support in the limitations section, even though the APIs were expected.

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> Sent: Monday, December 3, 2018 12:37 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update
> release note
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---

Don't we have to add change details, like:

V2:
 - updated the limitations section
 - removed sections, etc.?

Snipped

> +[Features]
> +Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
> +Rx interrupt         = Y
> +Queue start/stop     = Y
> +MTU update           = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +TSO                  = Y
> +Unicast MAC filter   = Y
> +Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y
> +VLAN filter          = Y
> +CRC offload          = Y
> +VLAN offload         = Y
> +QinQ offload         = Y
> +L3 checksum offload  = Y
> +L4 checksum offload  = Y
> +Packet type parsing  = Y
> +Rx descriptor status = Y
> +Tx descriptor status = Y
> +Basic stats          = Y
> +Extended stats       = Y

Do we support Traffic Manager and Inline Crypto? If not, can we also add these to the limitations?

> +FW version           = Y
> +Module EEPROM dump   = Y
> +Multiprocess aware   = Y
> +BSD nic_uio          = Y
> +Linux UIO            = Y
> +Linux VFIO           = Y
> +x86-32               = Y
> +x86-64               = Y

Is cross compilation for ARM and PPC disabled since the driver uses Intel-specific ISA? Should this be added to the limitations?


Snipped

> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +Please note that enabling debugging options may affect system performance.

Do we see real performance variance? If yes, can we highlight this in the info section?

snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-03  9:07     ` Varghese, Vipin
  2018-12-04  4:40     ` Varghese, Vipin
  1 sibling, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-03  9:07 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi,

snipped

> +# Compile burst-oriented ICE PMD driver # CONFIG_RTE_LIBRTE_ICE_PMD=y

Based on 'https://patches.dpdk.org/patch/48488/' it is suggested that this option needs to be configured, but here it is already set to 'y'. Is this correct? If yes, can you update 'https://patches.dpdk.org/patch/48488/'?

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build Wenzhuo Lu
@ 2018-12-03 10:00     ` Varghese, Vipin
  2018-12-05  7:03       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-03 10:00 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Shouldn't the meson build option be added at the start, i.e. in patch 1/20, so that the build does not fail for the intermediate patches?

Thanks
Vipin Varghese

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops
  2018-11-23  6:56 ` [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-03 15:24   ` Rami Rosen
  2018-12-03 15:43     ` Rami Rosen
  2018-12-06  2:53     ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Rami Rosen @ 2018-12-03 15:24 UTC (permalink / raw)
  To: wenzhuo.lu; +Cc: dev, qiming.yang, xiaoyun.li, jingjing.wu

Hi, Wenzhuo,

> +static int
> +ice_dev_start(struct rte_eth_dev *dev)
> +{
> +       struct rte_eth_dev_data *data = dev->data;
> +       struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +       struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +       uint16_t nb_rxq = 0;
> +       uint16_t nb_txq, i;
> +       int ret;
> +
> +       if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +               return -E_RTE_SECONDARY;
> +
[Rami Rosen] Suppose the start of a Tx queue fails in the loop below. You
go to the **tx_err** label, where you stop
all **Rx** queues (which actually were not started at all, since they
are started only later in this method); and then you
return -EIO and the ice_dev_start() method is terminated, without
actually stopping any Tx queues which were already started.
So maybe it is better to call ice_tx_queue_stop() in tx_err and
ice_rx_queue_stop() in rx_err (see the sketch after the quoted hunk below).
Apart from that, there is a typo: "Tx queues' contex" should be "Tx
queues' context".

> +       /* program Tx queues' contex in hardware */
> +       for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
> +               ret = ice_tx_queue_start(dev, nb_txq);
> +               if (ret) {
> +                       PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
> +                       goto tx_err;
> +               }
> +       }
> +

> +       /* program Rx queues' context in hardware*/
> +       for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
> +               ret = ice_rx_queue_start(dev, nb_rxq);
> +               if (ret) {
> +                       PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
> +                       goto rx_err;
> +               }
> +       }
...

> +       /* stop the started queues if failed to start all queues */
> +rx_err:
> +       for (i = 0; i < nb_txq; i++)
> +               ice_tx_queue_stop(dev, i);
> +tx_err:
> +       for (i = 0; i < nb_rxq; i++)
> +               ice_rx_queue_stop(dev, i);
> +
> +       return -EIO;
> +}
> +

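A minimal sketch of the unwind order I mean, reusing the names from the
hunk above (untested):

	/* stop only the queues that were actually started */
rx_err:
	/* Rx start failed at index nb_rxq: undo the started Rx queues... */
	for (i = 0; i < nb_rxq; i++)
		ice_rx_queue_stop(dev, i);
tx_err:
	/* ...then (or on a Tx failure at nb_txq) undo the started Tx queues */
	for (i = 0; i < nb_txq; i++)
		ice_tx_queue_stop(dev, i);

	return -EIO;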
Regards,
Rami Rosen

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops
  2018-12-03 15:24   ` Rami Rosen
@ 2018-12-03 15:43     ` Rami Rosen
  2018-12-06  2:53     ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Rami Rosen @ 2018-12-03 15:43 UTC (permalink / raw)
  To: wenzhuo.lu; +Cc: dev, qiming.yang, xiaoyun.li, jingjing.wu

Hi,

The same comment refers also to V2 of the patch, [PATCH v2 03/20]
net/ice: support device and queue ops

Regards,
Rami Rosen

On Mon, 3 Dec 2018 at 17:24, Rami Rosen <roszenrami@gmail.com> wrote:
>
> Hi, Wenzhuo,
>
> > +static int
> > +ice_dev_start(struct rte_eth_dev *dev)
> > +{
> > +       struct rte_eth_dev_data *data = dev->data;
> > +       struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > +       struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +       uint16_t nb_rxq = 0;
> > +       uint16_t nb_txq, i;
> > +       int ret;
> > +
> > +       if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +               return -E_RTE_SECONDARY;
> > +
> [Rami Rosen] Suppose the start of a Tx queue fails in the loop below. You
> go to the **tx_err** label, where you stop
> all **Rx** queues (which actually were not started at all, since they
> are started only later in this method); and then you
> return -EIO and the ice_dev_start() method is terminated, without
> actually stopping any Tx queues which were already started.
> So maybe it is better to call ice_tx_queue_stop() in tx_err and
> ice_rx_queue_stop() in rx_err.
> Apart from that, there is a typo: "Tx queues' contex" should be "Tx
> queues' context".
>
> > +       /* program Tx queues' contex in hardware */
> > +       for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
> > +               ret = ice_tx_queue_start(dev, nb_txq);
> > +               if (ret) {
> > +                       PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
> > +                       goto tx_err;
> > +               }
> > +       }
> > +
>
> > +       /* program Rx queues' context in hardware*/
> > +       for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
> > +               ret = ice_rx_queue_start(dev, nb_rxq);
> > +               if (ret) {
> > +                       PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
> > +                       goto rx_err;
> > +               }
> > +       }
> ...
>
> > +       /* stop the started queues if failed to start all queues */
> > +rx_err:
> > +       for (i = 0; i < nb_txq; i++)
> > +               ice_tx_queue_stop(dev, i);
> > +tx_err:
> > +       for (i = 0; i < nb_rxq; i++)
> > +               ice_rx_queue_stop(dev, i);
> > +
> > +       return -EIO;
> > +}
> > +
>
> Regards,
> Rami Rosen

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 01/20] net/ice: add base code Wenzhuo Lu
@ 2018-12-04  4:18     ` Varghese, Vipin
  2018-12-06  3:27       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  4:18 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Snipped

> +Intel® ICE driver
> +==================
> +
> +This directory contains source code of FreeBSD ice driver of version
> +2018.10.30 released by the team which develops
> +basic drivers for any ice NIC. The directory of base/ contains the
> +original source package.
> +This driver is valid for the product(s) listed below
> +
> +* Intel® Ethernet Network Adapters E810
> +
> +Updating the driver
> +===================
> +
> +NOTE: The source code in this directory should not be modified apart from
> +the following file(s):
> +
> +    ice_osdep.h

Will this README persist in upcoming releases of 'drivers/net/ice'?

Snipped

> +/* Manage MAC address, write command - direct (0x0108) */
> +struct ice_aqc_manage_mac_write {
> +	u8 port_num;
> +	u8 flags;
> +#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
> +#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
> +#define ICE_AQC_MAN_MAC_WR_S		6
> +#define ICE_AQC_MAN_MAC_WR_M		(3 <<
> ICE_AQC_MAN_MAC_WR_S)

Is this value '3' or 'BIT(3)'?

> +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> ICE_AQC_MAN_MAC_WR_S)

Can the code be rearranged as follows?

#define ICE_AQC_MAN_MAC_WR_S		6
#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)

Snipped

> +/* Each entry in the response buffer is of the following type: */
> +struct ice_aqc_get_sw_cfg_resp_elem {
> +	/* VSI/Port Number */
> +	__le16 vsi_port_num;
> +#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
> +#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
> +			(0x3FF <<
> ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
> +#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
> +#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 <<
> ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
> +#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
> +#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
> +#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
> +

Can the code be rearranged as follows?

#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)

snipped

> +
> +struct ice_aqc_get_phy_caps_data {
> +	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
> +	__le64 reserved;
> +	u8 caps;
> +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> +#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
> +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> +#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
> +#define ICE_AQC_PHY_CAPS_MASK
> 	MAKEMASK(0xff, 0)
> +	u8 low_power_ctrl;
> +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
> +	__le16 eee_cap;
> +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
> +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
> +	__le16 eeer_value;
> +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
> +	u8 phy_fw_ver[8];
> +	u8 link_fec_options;
> +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
> +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
> +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> +#define ICE_AQC_PHY_FEC_MASK
> 	MAKEMASK(0xdf, 0)
> +	u8 extended_compliance_code;
> +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
> +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
> +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
> +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> +	u8 qualified_module_count;
> +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> +	struct {
> +		u8 v_oui[3];
> +		u8 rsvd3;
> +		u8 v_part[16];
> +		__le32 v_rev;
> +		__le64 rsvd8;
> +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> +};
> +

Does the NIC support physical loopback? I am not able to find it here.
 
> +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)

Is the PHY low-power mode exposed to DPDK? If yes, can you mention the performance numbers or variance in the release documents?

Snipped

> +
> +/* Memory types */
> +enum ice_memset_type {
> +	ICE_NONDMA_MEM = 0,
> +	ICE_DMA_MEM
> +};
> +
> +/* Memcpy types */
> +enum ice_memcpy_type {
> +	ICE_NONDMA_TO_NONDMA = 0,
> +	ICE_NONDMA_TO_DMA,
> +	ICE_DMA_TO_DMA,
> +	ICE_DMA_TO_NONDMA
> +};
> +

Is this exposed through the user (rte_eth_dev) API? If yes, can you please let us know the performance impact on Rx/Tx in the release notes too.

Snipped

Suggestion: patch 01/20 is a bit too long.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization Wenzhuo Lu
  2018-12-03  9:07     ` Varghese, Vipin
@ 2018-12-04  4:40     ` Varghese, Vipin
  2018-12-06  5:01       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  4:40 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing


Snipped

> +	/* Set the info.ingress_table and info.egress_table
> +	 * for UP translate table. Now just set it to 1:1 map by default
> +	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
> +	 */
> +	info->ingress_table  = rte_cpu_to_le_32(0x00FAC688);
> +	info->egress_table   = rte_cpu_to_le_32(0x00FAC688);
> +	info->outer_up_table = rte_cpu_to_le_32(0x00FAC688);

Can we use a macro instead of the exact values for the ingress, egress and outer_up tables? A sketch follows the quoted lines below.

> +	return 0;
> +}
> +
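
A minimal sketch of what I mean (the macro name is illustrative only):

	/* 1:1 UP translate map: 0b 111 110 101 100 011 010 001 000 */
	#define ICE_DEFAULT_UP_TABLE	rte_cpu_to_le_32(0x00FAC688)

	info->ingress_table  = ICE_DEFAULT_UP_TABLE;
	info->egress_table   = ICE_DEFAULT_UP_TABLE;
	info->outer_up_table = ICE_DEFAULT_UP_TABLE;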

snipped

> +static int
> +ice_dev_init(struct rte_eth_dev *dev)
> +{
> +	struct rte_pci_device *pci_dev;
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	int ret;
> +
> +	dev->dev_ops = &ice_eth_dev_ops;
> +
> +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> +
> +	rte_eth_copy_pci_info(dev, pci_dev);
> +	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data-
> >dev_private);
> +	pf->adapter->eth_dev = dev;
> +	pf->dev_data = dev->data;
> +	hw->back = pf->adapter;
> +	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
> +	hw->vendor_id = pci_dev->id.vendor_id;
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
> +	hw->bus.device = pci_dev->addr.devid;
> +	hw->bus.func = pci_dev->addr.function;
> +
> +	ice_init_controlq_parameter(hw);
> +
> +	ret = ice_init_hw(hw);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> +		return -EINVAL;
> +	}

The definition of ice_init_hw in patch 01/20 does not check for primary/secondary. Are we allowing a secondary process to invoke ice_init_hw even when the HW was already initialized by the primary? A sketch of the guard I would expect follows.
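
A sketch only; the placement and the early-return value are illustrative:

	/* the HW is expected to be initialized by the primary already */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return 0;

	ret = ice_init_hw(hw);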

> +
> +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> +		     hw->api_maj_ver, hw->api_min_ver);
> +

Snipped

> +
> +static int
> +ice_dev_uninit(struct rte_eth_dev *dev) {
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +
> +	ICE_PROC_SECONDARY_CHECK_RET_0;

Shouldn't we check whether the primary is alive and whether the NIC is in use or was initialized by the primary before applying 'ICE_PROC_SECONDARY_CHECK_RET_0'?

> +
> +	ice_dev_close(dev);
> +
> +	dev->dev_ops = NULL;
> +	dev->rx_pkt_burst = NULL;
> +	dev->tx_pkt_burst = NULL;
> +
> +	rte_free(dev->data->mac_addrs);
> +	dev->data->mac_addrs = NULL;
> +
> +	ice_release_vsi(pf->main_vsi);
> +	ice_sched_cleanup_all(hw);
> +	rte_free(hw->port_info);
> +	ice_shutdown_all_ctrlq(hw);
> +
> +	return 0;
> +}
> +

snipped

> +static void
> +ice_dev_close(struct rte_eth_dev *dev)
> +{
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +
> +	ICE_PROC_SECONDARY_CHECK_NO_ERR;
> +

I am just wondering about the multi-process (primary/secondary) case: if the primary is killed or exits, and we then try to stop the device from the secondary, this check means the VSI and the pool are not released and the shutdown is not invoked. Shouldn't we check whether the primary is still alive, and only then apply
ICE_PROC_SECONDARY_CHECK_NO_ERR? A rough sketch of the check I have in mind follows the quoted hunk.

> +	ice_res_pool_destroy(&pf->msix_pool);
> +	ice_release_vsi(pf->main_vsi);
> +
> +	ice_shutdown_all_ctrlq(hw);
> +}
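
A rough sketch, assuming the default EAL runtime config path (NULL);
rte_eal_primary_proc_alive() is the existing EAL helper:

	/* skip teardown in a secondary only while the primary is alive;
	 * once the primary is gone, let the secondary release resources */
	if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
	    rte_eal_primary_proc_alive(NULL))
		return;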

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-04  4:53     ` Varghese, Vipin
  2018-12-06  5:03       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  4:53 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> +
> +static int ice_init_rss(struct ice_pf *pf) {
> +	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> +	struct ice_vsi *vsi = pf->main_vsi;
> +	struct rte_eth_dev *dev = pf->adapter->eth_dev;
> +	struct rte_eth_rss_conf *rss_conf;
> +	struct ice_aqc_get_set_rss_keys key;
> +	uint16_t i, nb_q;
> +	int ret = 0;
> +
> +	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
> +	nb_q = dev->data->nb_rx_queues;
> +	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
> +	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
> +
> +	if (!vsi->rss_key)
> +		vsi->rss_key = rte_zmalloc("rss_key",
> +					   vsi->rss_key_size, 0);
> +	if (!vsi->rss_lut)
> +		vsi->rss_lut = rte_zmalloc("rss_lut",
> +					   vsi->rss_lut_size, 0);

Two suggestions:
1. Should the name be a macro?
2. If there are multiple E810 NICs under DPDK, shouldn't each RSS name be unique, like "rss_key-%u" where %u is the port number? A sketch follows.
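
A minimal sketch of suggestion 2 (the format string and buffer size are
illustrative):

	char rss_name[RTE_MEMZONE_NAMESIZE];

	snprintf(rss_name, sizeof(rss_name), "rss_key-%u",
		 dev->data->port_id);
	if (!vsi->rss_key)
		vsi->rss_key = rte_zmalloc(rss_name,
					   vsi->rss_key_size, 0);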

Snipped

> +
> +static int
> +ice_dev_start(struct rte_eth_dev *dev)
> +{
> +	struct rte_eth_dev_data *data = dev->data;
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	uint16_t nb_rxq = 0;
> +	uint16_t nb_txq, i;
> +	int ret;
> +
> +	ICE_PROC_SECONDARY_CHECK;

Device start is not supported in a secondary process, but how is this differentiated between a device configured by the primary and one configured by the secondary?

I.e., the primary blacklists the device with '-b BB:DD:F' while the secondary whitelists it with '-w BB:DD:F'. In this case, since we only check the process type, will start return without starting the device?

Snipped

> +
> +static void
> +ice_dev_stop(struct rte_eth_dev *dev)
> +{
> +	struct rte_eth_dev_data *data = dev->data;
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	uint16_t i;
> +
> +	/* avoid stopping again */
> +	if (pf->adapter_stopped)
> +		return;
> +
> +	ICE_PROC_SECONDARY_CHECK_NO_ERR;

Same as above.

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-04  4:59     ` Varghese, Vipin
  2018-12-06  5:28       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  4:59 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> +static void
> +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> +*dev_info) {
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> +	struct ice_vsi *vsi = pf->main_vsi;
> +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> +
> +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> +	dev_info->max_rx_queues = vsi->nb_qps;
> +	dev_info->max_tx_queues = vsi->nb_qps;
> +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> +	dev_info->max_vfs = pci_dev->max_vfs;
> +
> +	dev_info->rx_offload_capa =
> +		DEV_RX_OFFLOAD_VLAN_STRIP |
> +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_UDP_CKSUM |
> +		DEV_RX_OFFLOAD_TCP_CKSUM |
> +		DEV_RX_OFFLOAD_QINQ_STRIP |
> +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> +		DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	dev_info->tx_offload_capa =
> +		DEV_TX_OFFLOAD_VLAN_INSERT |
> +		DEV_TX_OFFLOAD_QINQ_INSERT |
> +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> +		DEV_TX_OFFLOAD_UDP_CKSUM |
> +		DEV_TX_OFFLOAD_TCP_CKSUM |
> +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> +		DEV_TX_OFFLOAD_TCP_TSO;
> +	dev_info->rx_queue_offload_capa = 0;
> +	dev_info->tx_queue_offload_capa = 0;

Does this mean per-queue offload capability is not supported? If yes, can you mention this in the release notes under support or limitations?

> +
> +	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
> +	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t);
> +	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
> +
> +	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +		.rx_thresh = {
> +			.pthresh = ICE_DEFAULT_RX_PTHRESH,
> +			.hthresh = ICE_DEFAULT_RX_HTHRESH,
> +			.wthresh = ICE_DEFAULT_RX_WTHRESH,
> +		},
> +		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
> +		.rx_drop_en = 0,
> +		.offloads = 0,
Are the drop function and rx_conf.offloads supported? If yes, and the device is not configured yet, shouldn't all offloads be set?

> +	};
> +
> +	dev_info->default_txconf = (struct rte_eth_txconf) {
> +		.tx_thresh = {
> +			.pthresh = ICE_DEFAULT_TX_PTHRESH,
> +			.hthresh = ICE_DEFAULT_TX_HTHRESH,
> +			.wthresh = ICE_DEFAULT_TX_WTHRESH,
> +		},
> +		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
> +		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
> +		.offloads = 0,

If the device is not configured, shouldn't all offloads be set to true?

Snipped

> +	switch (hw->port_info->phy.link_info.link_speed) {

If the device switch is not configured (default values from NVM), should we highlight that the switch can support speeds of 10, 100, 1000, 10000 Mb/s and so on?

> +	case ICE_AQ_LINK_SPEED_10MB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_10M;
> +		break;
> +	case ICE_AQ_LINK_SPEED_100MB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_100M;
> +		break;
> +	case ICE_AQ_LINK_SPEED_1000MB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_1G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_2500MB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_5GB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_5G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_10GB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_10G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_20GB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_20G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_25GB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_25G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_40GB:
> +		dev_info->speed_capa = ETH_LINK_SPEED_40G;
> +		break;
> +	case ICE_AQ_LINK_SPEED_UNKNOWN:
> +	default:
> +		PMD_DRV_LOG(ERR, "Unknown link speed");
> +		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
> +		break;
> +	}

If the reported speed capability is not accurate, as stated above, can you please add this to the release notes and documentation.

> +
> +	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
> +	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
> +
> +	dev_info->default_rxportconf.burst_size = 32;
> +	dev_info->default_txportconf.burst_size = 32;
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_rxportconf.ring_size = 1024;
> +	dev_info->default_txportconf.ring_size = 1024; 

Can we use a macro here (in a previous patch there was MAX_BURST_SIZE)?

> +}
> --
> 1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-04  5:19     ` Varghese, Vipin
  2018-12-06  5:34       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:19 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Zhao1, Wei

snipped
> +static inline uint32_t
> +ice_get_default_pkt_type(uint16_t ptype) {

Suggestion: should we bounds-check 'ptype' and return RTE_PTYPE_UNKNOWN when it is out of range? A sketch follows.
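
A sketch of the check (ICE_MAX_PKT_TYPE and type_table are the names from
this patch):

	if (ptype >= ICE_MAX_PKT_TYPE)
		return RTE_PTYPE_UNKNOWN;

	return type_table[ptype];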

> +	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
> +		__rte_cache_aligned = {
> +		/* L2 types */
> +		/* [0] reserved */
> +		[1] = RTE_PTYPE_L2_ETHER,
> +		/* [2] - [5] reserved */
> +		[6] = RTE_PTYPE_L2_ETHER_LLDP,
> +		/* [7] - [10] reserved */
> +		[11] = RTE_PTYPE_L2_ETHER_ARP,
> +		/* [12] - [21] reserved */
> +
> +		/* Non tunneled IPv4 */
> +		[22] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_FRAG,
> +		[23] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_NONFRAG,
> +		[24] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_UDP,
> +		/* [25] reserved */
> +		[26] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_TCP,
> +		[27] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_SCTP,
> +		[28] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_ICMP,
> +
> +		/* IPv4 --> IPv4 */
> +		[29] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[30] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[31] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [32] reserved */
> +		[33] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[34] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[35] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> IPv6 */
> +		[36] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[37] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[38] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [39] reserved */
> +		[40] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[41] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[42] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN */
> +		[43] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
> +		[44] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[45] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[46] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [47] reserved */
> +		[48] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[49] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[50] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
> +		[51] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[52] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[53] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [54] reserved */
> +		[55] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[56] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[57] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
> +		[58] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
> +		[59] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[60] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[61] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [62] reserved */
> +		[63] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[64] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[65] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
> +		[66] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[67] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[68] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [69] reserved */
> +		[70] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[71] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[72] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
> +		[73] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
> +		[74] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[75] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[76] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [77] reserved */
> +		[78] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[79] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[80] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
> +		[81] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[82] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[83] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [84] reserved */
> +		[85] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[86] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_SCTP,
> +		[87] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_GRENAT |
> +		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* Non tunneled IPv6 */
> +		[88] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_FRAG,
> +		[89] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_NONFRAG,
> +		[90] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_UDP,
> +		/* [91] reserved */
> +		[92] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_TCP,
> +		[93] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_SCTP,
> +		[94] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_L4_ICMP,
> +
> +		/* IPv6 --> IPv4 */
> +		[95] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_FRAG,
> +		[96] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_NONFRAG,
> +		[97] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_UDP,
> +		/* [98] reserved */
> +		[99] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +		       RTE_PTYPE_TUNNEL_IP |
> +		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +		       RTE_PTYPE_INNER_L4_TCP,
> +		[100] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[101] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> IPv6 */
> +		[102] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[103] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[104] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [105] reserved */
> +		[106] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[107] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[108] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_IP |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN */
> +		[109] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
> +		[110] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[111] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[112] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [113] reserved */
> +		[114] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[115] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[116] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
> +		[117] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[118] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[119] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [120] reserved */
> +		[121] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[122] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[123] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
> +		[124] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
> +		[125] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[126] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[127] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [128] reserved */
> +		[129] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[130] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[131] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
> +		[132] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[133] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[134] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [135] reserved */
> +		[136] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[137] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[138] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> RTE_PTYPE_INNER_L2_ETHER |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
> +		[139] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
> +		[140] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[141] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[142] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [143] reserved */
> +		[144] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[145] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[146] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +
> +		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
> +		[147] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_FRAG,
> +		[148] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_NONFRAG,
> +		[149] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_UDP,
> +		/* [150] reserved */
> +		[151] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_TCP,
> +		[152] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_SCTP,
> +		[153] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GRENAT |
> +			RTE_PTYPE_INNER_L2_ETHER_VLAN |
> +			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_INNER_L4_ICMP,
> +		/* [154] - [255] reserved */
> +		[256] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GTPC,
> +		[257] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GTPC,
> +		[258] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +				RTE_PTYPE_TUNNEL_GTPU,
> +		[259] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +				RTE_PTYPE_TUNNEL_GTPU,
> +		/* [260] - [263] reserved */
> +		[264] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GTPC,
> +		[265] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +			RTE_PTYPE_TUNNEL_GTPC,
> +		[266] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +				RTE_PTYPE_TUNNEL_GTPU,
> +		[267] = RTE_PTYPE_L2_ETHER |
> RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> +				RTE_PTYPE_TUNNEL_GTPU,
> +
> +		/* All others reserved */
> +	};

Suggestion: is it OK to use macros instead of the array?
snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-04  5:25     ` Varghese, Vipin
  2018-12-04  5:51       ` Varghese, Vipin
  2018-12-06  5:35       ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:25 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> +static int
> +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;

Should this be 'ICE_VLAN_TAG_SIZE' or 'ICE_SWITCH_VLAN_TAG_SIZE'?
snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics Wenzhuo Lu
@ 2018-12-04  5:35     ` Varghese, Vipin
  2018-12-06  5:37       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:35 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Guo, Jia

snipped
> +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;

In an earlier patch, for the 'mtu set' check, we added the VSI switch VLAN size. Should we add the VSI VLAN here too?
snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
  2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-04  5:42     ` Varghese, Vipin
  2018-12-04  5:44       ` Varghese, Vipin
  2018-12-06  5:39       ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:42 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> +uint16_t
> +ice_recv_pkts(void *rx_queue,
> +	      struct rte_mbuf **rx_pkts,
> +	      uint16_t nb_pkts)
> +{
> +	struct ice_rx_queue *rxq = rx_queue;
> +	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> +	volatile union ice_rx_desc *rxdp;
> +	union ice_rx_desc rxd;
> +	struct ice_rx_entry *sw_ring = rxq->sw_ring;
> +	struct ice_rx_entry *rxe;
> +	struct rte_mbuf *nmb; /* new allocated mbuf */
> +	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> +	uint16_t rx_id = rxq->rx_tail;
> +	uint16_t nb_rx = 0;
> +	uint16_t nb_hold = 0;
> +	uint16_t rx_packet_len;
> +	uint32_t rx_status;
> +	uint64_t qword1;
> +	uint64_t dma_addr;
> +	uint64_t pkt_flags = 0;
> +	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> +	struct rte_eth_dev *dev;
> +
> +	while (nb_rx < nb_pkts) {
> +		rxdp = &rx_ring[rx_id];
> +		qword1 = rte_le_to_cpu_64(rxdp-
> >wb.qword1.status_error_len);
> +		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> +			    ICE_RXD_QW1_STATUS_S;
> +
> +		/* Check the DD bit first */
> +		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> +			break;
> +
> +		/* allocate mbuf */
> +		nmb = rte_mbuf_raw_alloc(rxq->mp);
> +		if (unlikely(!nmb)) {
> +			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> +			dev->data->rx_mbuf_alloc_failed++;
> +			break;
> +		}

Should we check whether the received packet length is greater than the mbuf pkt_len, in which case we need a bulk alloc with multiple segments? A hedged sketch follows.

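A hedged sketch, not from this patch: when a frame can span more than one
mbuf, ethdev PMDs usually select a scattered receive path at setup time.
DEV_RX_OFFLOAD_SCATTER is the generic ethdev flag; ice_recv_scattered_pkts
is a hypothetical name here:

	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
		dev->rx_pkt_burst = ice_recv_scattered_pkts;
	else
		dev->rx_pkt_burst = ice_recv_pkts;
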
> +		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
> +
> +		nb_hold++;
> +		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
> +		rx_id++;
> +		if (unlikely(rx_id == rxq->nb_rx_desc))
> +			rx_id = 0;
> +		rxm = rxe->mbuf;
> +		rxe->mbuf = nmb;
> +		dma_addr =
> +			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> +
> +		/**
> +		 * fill the read format of descriptor with physic address in
> +		 * new allocated mbuf: nmb
> +		 */
> +		rxdp->read.hdr_addr = 0;
> +		rxdp->read.pkt_addr = dma_addr;
> +
> +		/* calculate rx_packet_len of the received pkt */
> +		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
> +				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
> +
> +		/* fill old mbuf with received descriptor: rxd */
> +		rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr,
> RTE_PKTMBUF_HEADROOM));
> +		rxm->nb_segs = 1;

Same comment as above: multi-segment alloc for larger packets, or for a smaller pkt_len in the mempool?

Snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
  2018-12-04  5:42     ` Varghese, Vipin
@ 2018-12-04  5:44       ` Varghese, Vipin
  2018-12-06  5:39       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:44 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev
  Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Maybe this logic just allocates the initial descriptor and not the actual buffers; if so, please ignore my comments.

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Varghese, Vipin
> Sent: Tuesday, December 4, 2018 11:12 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
> 
> snipped
> > +uint16_t
> > +ice_recv_pkts(void *rx_queue,
> > +	      struct rte_mbuf **rx_pkts,
> > +	      uint16_t nb_pkts)
> > +{
> > +	struct ice_rx_queue *rxq = rx_queue;
> > +	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> > +	volatile union ice_rx_desc *rxdp;
> > +	union ice_rx_desc rxd;
> > +	struct ice_rx_entry *sw_ring = rxq->sw_ring;
> > +	struct ice_rx_entry *rxe;
> > +	struct rte_mbuf *nmb; /* new allocated mbuf */
> > +	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> > +	uint16_t rx_id = rxq->rx_tail;
> > +	uint16_t nb_rx = 0;
> > +	uint16_t nb_hold = 0;
> > +	uint16_t rx_packet_len;
> > +	uint32_t rx_status;
> > +	uint64_t qword1;
> > +	uint64_t dma_addr;
> > +	uint64_t pkt_flags = 0;
> > +	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > +	struct rte_eth_dev *dev;
> > +
> > +	while (nb_rx < nb_pkts) {
> > +		rxdp = &rx_ring[rx_id];
> > +		qword1 = rte_le_to_cpu_64(rxdp-
> > >wb.qword1.status_error_len);
> > +		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> > +			    ICE_RXD_QW1_STATUS_S;
> > +
> > +		/* Check the DD bit first */
> > +		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> > +			break;
> > +
> > +		/* allocate mbuf */
> > +		nmb = rte_mbuf_raw_alloc(rxq->mp);
> > +		if (unlikely(!nmb)) {
> > +			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> > +			dev->data->rx_mbuf_alloc_failed++;
> > +			break;
> > +		}
> 
> Should we check whether the received packet length is greater than the mbuf
> pkt_len, in which case we need a bulk alloc with multiple segments?
> 
> > +		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
> > +
> > +		nb_hold++;
> > +		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
> > +		rx_id++;
> > +		if (unlikely(rx_id == rxq->nb_rx_desc))
> > +			rx_id = 0;
> > +		rxm = rxe->mbuf;
> > +		rxe->mbuf = nmb;
> > +		dma_addr =
> > +			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> > +
> > +		/**
> > +		 * fill the read format of descriptor with physic address in
> > +		 * new allocated mbuf: nmb
> > +		 */
> > +		rxdp->read.hdr_addr = 0;
> > +		rxdp->read.pkt_addr = dma_addr;
> > +
> > +		/* calculate rx_packet_len of the received pkt */
> > +		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
> > +				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
> > +
> > +		/* fill old mbuf with received descriptor: rxd */
> > +		rxm->data_off = RTE_PKTMBUF_HEADROOM;
> > +		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr,
> > RTE_PKTMBUF_HEADROOM));
> > +		rxm->nb_segs = 1;
> 
> Same comment as above: multi-segment alloc for larger packets, or for a
> smaller pkt_len in the mempool?
> 
> Snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-04  5:25     ` Varghese, Vipin
@ 2018-12-04  5:51       ` Varghese, Vipin
  2018-12-06  5:41         ` Lu, Wenzhuo
  2018-12-06  5:35       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-04  5:51 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev
  Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Can you point me to the patch where 'get_mtu' is defined?

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Varghese, Vipin
> Sent: Tuesday, December 4, 2018 10:56 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
> 
> snipped
> > +static int
> > +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> > +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> > +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
> 
> Should this be ' ICE_VLAN_TAG_SIZE' or ' ICE_SWITCH_VLAN_TAG_SIZE'?
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
  2018-11-23 11:00 ` [dpdk-dev] [PATCH 00/19] A new net PMD - ice Thomas Monjalon
@ 2018-12-05  6:39   ` Lu, Wenzhuo
  2018-12-05  7:28     ` Thomas Monjalon
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-05  6:39 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Hi Thomas,


> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, November 23, 2018 7:00 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
> 
> Hi Wenzhuo,
> 
> 23/11/2018 07:56, Wenzhuo Lu:
> >   net/ice: add base code
> 
> This first patch is really too big.
> Please could you try to split it logically?
This is base code which was not developed by us, so we're not that familiar with it. It's the first time this code is being released; I know it's too big, to us as well, but we'll maintain it anyway.
To save effort, we usually don't split the base code when releasing it for the first time. Is that OK?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-03  8:15     ` Varghese, Vipin
@ 2018-12-05  6:54       ` Lu, Wenzhuo
  2018-12-06  4:34         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-05  6:54 UTC (permalink / raw)
  To: Varghese, Vipin, dev

 Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Monday, December 3, 2018 4:15 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> update release note
> 
> Hi,
> 
> Thanks for adding details for secondary multi process in limitations section,
> even though the API were expected.
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> > Sent: Monday, December 3, 2018 12:37 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> > update release note
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > ---
> 
> Do not we have to add change details like
> 
> V2:
>  - updated the limitation sections
>  - removed section etc?
Good suggestion, thanks. I can add that in the next version.

> 
> Snipped
> 
> > +[Features]
> > +Speed capabilities   = Y
> > +Link status          = Y
> > +Link status event    = Y
> > +Rx interrupt         = Y
> > +Queue start/stop     = Y
> > +MTU update           = Y
> > +Jumbo frame          = Y
> > +Scattered Rx         = Y
> > +TSO                  = Y
> > +Unicast MAC filter   = Y
> > +Multicast MAC filter = Y
> > +RSS hash             = Y
> > +RSS key update       = Y
> > +RSS reta update      = Y
> > +VLAN filter          = Y
> > +CRC offload          = Y
> > +VLAN offload         = Y
> > +QinQ offload         = Y
> > +L3 checksum offload  = Y
> > +L4 checksum offload  = Y
> > +Packet type parsing  = Y
> > +Rx descriptor status = Y
> > +Tx descriptor status = Y
> > +Basic stats          = Y
> > +Extended stats       = Y
> 
> Do we support Traffic Manager and Inline Crypto? If not, can we add this
> also to limitations?
The style here is to list only the supported features. I don't think we'll list everything that is not supported as a limitation.
The same below.

> 
> > +FW version           = Y
> > +Module EEPROM dump   = Y
> > +Multiprocess aware   = Y
> > +BSD nic_uio          = Y
> > +Linux UIO            = Y
> > +Linux VFIO           = Y
> > +x86-32               = Y
> > +x86-64               = Y
> 
> Is cross compile for ARM and PPC disabled since it uses Intel specific ISA?
> Should this be added to limitations?
> 
> 
> Snipped
> 
> > +
> > +Config File Options
> > +~~~~~~~~~~~~~~~~~~~
> > +
> > +The following options can be modified in the ``config`` file.
> > +Please note that enabling debugging options may affect system
> performance.
> 
> Do we see real performance variance? If yes, can we highlight this in info
> section?
IMHO, we could get different numbers in different scenarios. It's only a reminder not to enable debug.

> 
> snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-03 10:00     ` Varghese, Vipin
@ 2018-12-05  7:03       ` Lu, Wenzhuo
  2018-12-06  4:31         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-05  7:03 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Monday, December 3, 2018 6:01 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
> 
> Should not meson build option be add start. That is in patch 1/20 so compile
> options does not fail?
It will not fail. Enabling the compile earlier only means the code can be compiled; to actually use this device we need the whole patch set. From this point of view, enabling the compile at the end may be better.

> 
> Thanks
> Vipin Varghese

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
  2018-12-05  6:39   ` Lu, Wenzhuo
@ 2018-12-05  7:28     ` Thomas Monjalon
  2018-12-05  8:19       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Thomas Monjalon @ 2018-12-05  7:28 UTC (permalink / raw)
  To: Lu, Wenzhuo; +Cc: dev

05/12/2018 07:39, Lu, Wenzhuo:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 23/11/2018 07:56, Wenzhuo Lu:
> > >   net/ice: add base code
> > 
> > This first patch is really too big.
> > Please could you try to split it logically?
> 
> This is base code which is not developed by us. We're not so familiar with it. It's the first time to release the code, I know it's too big, to us too. But anyway we'll maintain it. 
> To save the effort, we always don't split the base code when releasing it at the first time. Is that OK?

It's not the best start.
How can a brand new driver start with so much code that is not fully
understood by those who will maintain it?

It's really better to start small and grow.
It would ease both testing and understanding.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
  2018-12-05  7:28     ` Thomas Monjalon
@ 2018-12-05  8:19       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-05  8:19 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, December 5, 2018 3:29 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 00/19] A new net PMD - ice
> 
> 05/12/2018 07:39, Lu, Wenzhuo:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 23/11/2018 07:56, Wenzhuo Lu:
> > > >   net/ice: add base code
> > >
> > > This first patch is really too big.
> > > Please could you try to split it logically?
> >
> > This is base code which is not developed by us. We're not so familiar with
> it. It's the first time to release the code, I know it's too big, to us too. But
> anyway we'll maintain it.
> > To save the effort, we always don't split the base code when releasing it at
> the first time. Is that OK?
> 
> It's not the best start.
> How a very new driver can start with so many code and not being fully
> understood by those who will maintain it?
> 
> It's really better to start small and grow.
> It would ease both testing and understanding.
Agreed, ideally it would be much better if I could understand the whole code. But there are about 26K lines of code; I cannot honestly claim that I, or any single person, fully understands all of them. There's a team supporting us: if any bug relates to this code, we'll ask the team for help, and even when we find the root cause of a bug ourselves, we'd like to get confirmation from them. That's how we make it work for all our NICs.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops
  2018-12-03 15:24   ` Rami Rosen
  2018-12-03 15:43     ` Rami Rosen
@ 2018-12-06  2:53     ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  2:53 UTC (permalink / raw)
  To: Rami Rosen; +Cc: dev, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Rami,


> -----Original Message-----
> From: Rami Rosen [mailto:roszenrami@gmail.com]
> Sent: Monday, December 3, 2018 11:24 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 03/19] net/ice: support device and queue
> ops
> 
> Hi, Wenzhuo,
> 
> > +static int
> > +ice_dev_start(struct rte_eth_dev *dev) {
> > +       struct rte_eth_dev_data *data = dev->data;
> > +       struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +       struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > +       uint16_t nb_rxq = 0;
> > +       uint16_t nb_txq, i;
> > +       int ret;
> > +
> > +       if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +               return -E_RTE_SECONDARY;
> > +
> [Rami Rosen] Suppose start of a TX queue failes in the loop below. You go to
> **tx_err** label, where you stop all **RX** queues (which actually were not
> started at all, since they are started only later in this method; and then you
> return -EIO and the ice_dev_start() method is terminated, without actually
> stopping any TX queues which were already started;
> So maybe it is better to call   ice_tx_queue_stop() in tx_err and
> ice_rx_queue_stop() in rx_err.
> Apart from it, there is a typo:  "Tx queues' contex" should be =>Tx queues'
> context"
Thanks for the comments. The logic is not good; will make it better in the next version, along the lines below.
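
Something like this (a minimal sketch; the queue-stop helpers come from the patch, the exact shape is an assumption):

	/* stop the started queues if failed to start all queues */
rx_err:
	/* Rx failed at index nb_rxq: stop the Rx queues started so far,
	 * then fall through -- all Tx queues were started already. */
	for (i = 0; i < nb_rxq; i++)
		ice_rx_queue_stop(dev, i);
tx_err:
	/* Tx failed at index nb_txq: stop the Tx queues started so far. */
	for (i = 0; i < nb_txq; i++)
		ice_tx_queue_stop(dev, i);

	return -EIO;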

> 
> > +       /* program Tx queues' contex in hardware */
> > +       for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
> > +               ret = ice_tx_queue_start(dev, nb_txq);
> > +               if (ret) {
> > +                       PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
> > +                       goto tx_err;
> > +               }
> > +       }
> > +
> 
> > +       /* program Rx queues' context in hardware*/
> > +       for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
> > +               ret = ice_rx_queue_start(dev, nb_rxq);
> > +               if (ret) {
> > +                       PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
> > +                       goto rx_err;
> > +               }
> > +       }
> ,,,
> 
> > +       /* stop the started queues if failed to start all queues */
> > +rx_err:
> > +       for (i = 0; i < nb_txq; i++)
> > +               ice_tx_queue_stop(dev, i);
> > +tx_err:
> > +       for (i = 0; i < nb_rxq; i++)
> > +               ice_rx_queue_stop(dev, i);
> > +
> > +       return -EIO;
> > +}
> > +
> 
> Regards,
> Rami Rosen

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-04  4:18     ` Varghese, Vipin
@ 2018-12-06  3:27       ` Lu, Wenzhuo
  2018-12-06  4:28         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  3:27 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 12:19 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
> 
> Snipped
> 
> > +Intel® ICE driver
> > +==================
> > +
> > +This directory contains source code of FreeBSD ice driver of version
> > +2018.10.30 released by the team which develops basic drivers for any
> > +ice NIC. The directory of base/ contains the original source package.
> > +This driver is valid for the product(s) listed below
> > +
> > +* Intel® Ethernet Network Adapters E810
> > +
> > +Updating the driver
> > +===================
> > +
> > +NOTE: The source code in this directory should not be modified apart
> > +from the following file(s):
> > +
> > +    ice_osdep.h
> 
> Is this README persistent in upcoming releases of 'driver/net/ice'?
Yes.
> 
> Snipped
> 
> > +/* Manage MAC address, write command - direct (0x0108) */ struct
> > +ice_aqc_manage_mac_write {
> > +	u8 port_num;
> > +	u8 flags;
> > +#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
> > +#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
> > +#define ICE_AQC_MAN_MAC_WR_S		6
> > +#define ICE_AQC_MAN_MAC_WR_M		(3 <<
> > ICE_AQC_MAN_MAC_WR_S)
> 
> Is this value '3' or 'BIT(3)'?
It's 3.
> 
> > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> > ICE_AQC_MAN_MAC_WR_S)
> 
> Can the code be rearranged for?
We don't want to change the base code, for the sake of easier maintenance.

> 
> #define ICE_AQC_MAN_MAC_WR_S		6
> #define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> #define ICE_AQC_MAN_MAC_WR_M		(3 <<
> ICE_AQC_MAN_MAC_WR_S)
> #define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> ICE_AQC_MAN_MAC_WR_S)
> 
> Snipped
> 
> > +/* Each entry in the response buffer is of the following type: */
> > +struct ice_aqc_get_sw_cfg_resp_elem {
> > +	/* VSI/Port Number */
> > +	__le16 vsi_port_num;
> > +#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
> > +#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
> > +			(0x3FF <<
> > ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
> > +#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
> > +#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 <<
> > ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
> > +#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
> > +#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
> > +#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
> > +
> 
> Can the code be rearranged for?
The same as above.

> 
> #define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
> #define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
> #define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
> 			(0x3FF <<
> ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
> #define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 <<
> ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
> 
> snipped
> 
>  +
> > +struct ice_aqc_get_phy_caps_data {
> > +	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
> > +	__le64 reserved;
> > +	u8 caps;
> > +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> > +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> > +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> > +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> > +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> > +#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
> > +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> > +#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
> > +#define ICE_AQC_PHY_CAPS_MASK
> > 	MAKEMASK(0xff, 0)
> > +	u8 low_power_ctrl;
> > +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG
> 	BIT(0)
> > +	__le16 eee_cap;
> > +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> > +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> > +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> > +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
> > +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> > +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> > +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
> > +	__le16 eeer_value;
> > +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
> > +	u8 phy_fw_ver[8];
> > +	u8 link_fec_options;
> > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> > +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
> > +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> > +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
> > +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> > +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> > +#define ICE_AQC_PHY_FEC_MASK
> > 	MAKEMASK(0xdf, 0)
> > +	u8 extended_compliance_code;
> > +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> > +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> > +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
> > +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
> > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> > +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
> > +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> > +	u8 qualified_module_count;
> > +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> > +	struct {
> > +		u8 v_oui[3];
> > +		u8 rsvd3;
> > +		u8 v_part[16];
> > +		__le32 v_rev;
> > +		__le64 rsvd8;
> > +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> > +};
> > +
> 
> Does the NIC support physical loopback? I am not able to find here.
Not sure about it, but there's no plan for this at this stage.

> 
> > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> 
> Does Low Power PMD is exposed to DPDK? If yes, can you mention the
> performance numbers or variance in Release documents?
There's no plan for it in this release.

> 
> Snipped
> 
> > +
> > +/* Memory types */
> > +enum ice_memset_type {
> > +	ICE_NONDMA_MEM = 0,
> > +	ICE_DMA_MEM
> > +};
> > +
> > +/* Memcpy types */
> > +enum ice_memcpy_type {
> > +	ICE_NONDMA_TO_NONDMA = 0,
> > +	ICE_NONDMA_TO_DMA,
> > +	ICE_DMA_TO_DMA,
> > +	ICE_DMA_TO_NONDMA
> > +};
> > +
> 
> Is this exposed to user (rte_eth_dev) API? If yes, can you please let know the
> performance impact in RX|TX in release notes too.
There's no plan for it in this release.
> 
> Snipped
> 
> Suggestion: patch 01/20 is bit too long
This is being discussed in another thread.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  3:27       ` Lu, Wenzhuo
@ 2018-12-06  4:28         ` Varghese, Vipin
  2018-12-06  5:55           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  4:28 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

Hi Wenzhuo,

Thanks for the updates; a couple of follow-ups and suggestions.

snipped
> >
> > > +Intel® ICE driver
> > > +==================
> > > +
> > > +This directory contains source code of FreeBSD ice driver of
> > > +version
> > > +2018.10.30 released by the team which develops basic drivers for
> > > +any ice NIC. The directory of base/ contains the original source package.
> > > +This driver is valid for the product(s) listed below
> > > +
> > > +* Intel® Ethernet Network Adapters E810
> > > +
> > > +Updating the driver
> > > +===================
> > > +
> > > +NOTE: The source code in this directory should not be modified
> > > +apart from the following file(s):
> > > +
> > > +    ice_osdep.h
> >
> > Is this README persistent in upcoming releases of 'driver/net/ice'?
> Yes.
If the Linux driver is enabled in kernel 4.20.1 or higher, will the wording 'This directory contains source code of FreeBSD ice driver' still hold true?

> >
snipped
> > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> > > ICE_AQC_MAN_MAC_WR_S)
> >
> > Can the code be rearranged for?
> We don’t want to change the base code for the sake of maintenance.
I do not follow this; isn't your team or an individual maintaining it? There should be a maintainer for this PMD.

snipped
> > > +struct ice_aqc_get_phy_caps_data {
> > > +	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
> > > +	__le64 reserved;
> > > +	u8 caps;
> > > +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> > > +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> > > +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> > > +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> > > +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> > > +#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
> > > +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> > > +#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
> > > +#define ICE_AQC_PHY_CAPS_MASK
> > > 	MAKEMASK(0xff, 0)
> > > +	u8 low_power_ctrl;
> > > +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG
> > 	BIT(0)
> > > +	__le16 eee_cap;
> > > +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
> > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> > > +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> > > +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
> > > +	__le16 eeer_value;
> > > +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
> > > +	u8 phy_fw_ver[8];
> > > +	u8 link_fec_options;
> > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> > > +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
> > > +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> > > +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
> > > +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> > > +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> > > +#define ICE_AQC_PHY_FEC_MASK
> > > 	MAKEMASK(0xdf, 0)
> > > +	u8 extended_compliance_code;
> > > +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> > > +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> > > +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
> > > +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
> > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> > > +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
> > > +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> > > +	u8 qualified_module_count;
> > > +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> > > +	struct {
> > > +		u8 v_oui[3];
> > > +		u8 rsvd3;
> > > +		u8 v_part[16];
> > > +		__le32 v_rev;
> > > +		__le64 rsvd8;
> > > +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> > > +};
> > > +
> >
> > Does the NIC support physical loopback? I am not able to find here.
> Not sure about it. But no plan for this at this stage.
Please add this to the release notes and the PMD documentation as well.

> 
> >
> > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> >
> > Does Low Power PMD is exposed to DPDK? If yes, can you mention the
> > performance numbers or variance in Release documents?
> No plan for it at this release.
Wouldn't it be better not to add it here, or else to note it in a comment and in the release notes? Right now it is dead code.

> 
> >
> > Snipped
> >
> > > +
> > > +/* Memory types */
> > > +enum ice_memset_type {
> > > +	ICE_NONDMA_MEM = 0,
> > > +	ICE_DMA_MEM
> > > +};
> > > +
> > > +/* Memcpy types */
> > > +enum ice_memcpy_type {
> > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > +	ICE_NONDMA_TO_DMA,
> > > +	ICE_DMA_TO_DMA,
> > > +	ICE_DMA_TO_NONDMA
> > > +};
> > > +
> >
> > Is this exposed to user (rte_eth_dev) API? If yes, can you please let
> > know the performance impact in RX|TX in release notes too.
> No plan for it at this release.
Please update:
a. what the difference is, at least as comments;
b. the release notes about the same.

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-05  7:03       ` Lu, Wenzhuo
@ 2018-12-06  4:31         ` Varghese, Vipin
  2018-12-06  5:59           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  4:31 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

Hi Wenzhuo

snipped
> >
> > Should not meson build option be add start. That is in patch 1/20 so
> > compile options does not fail?
> It will not fail. Enabling the compile earlier only means the code can be compiled.
> But, to use this device we do need the whole patch set. From this point of view,
> compiling it at the end maybe better.
Thanks for the update. So will the meson build succeed if only 3 of the patches are applied?

> 
> >
> > Thanks
> > Vipin Varghese

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-05  6:54       ` Lu, Wenzhuo
@ 2018-12-06  4:34         ` Varghese, Vipin
  2018-12-06  6:05           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  4:34 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

Thanks Wenzhuo

snipped
> >
> > Do we support Traffic Manager and Inline Crypto? If not, can we add
> > this also to limitations?
> The style is only listed supported features here. I don't think we'll list everything
> not supported as limitation.
Thanks for the correction on the format; the suggested action item, then, is to update the limitations section to state what is not supported.

> The same below.
> 
> >
> > > +FW version           = Y
> > > +Module EEPROM dump   = Y
> > > +Multiprocess aware   = Y
> > > +BSD nic_uio          = Y
> > > +Linux UIO            = Y
> > > +Linux VFIO           = Y
> > > +x86-32               = Y
> > > +x86-64               = Y
> >
> > Is cross compile for ARM and PPC disabled since it uses Intel specific ISA?
> > Should this be added to limitations?
Are PowerPC and ARM cross builds (without AVX and SSE ISA) supported? If not, will the default '.config' have 'ICE_PMD=n'?

> >
> >
> > Snipped
> >
> > > +
> > > +Config File Options
> > > +~~~~~~~~~~~~~~~~~~~
> > > +
> > > +The following options can be modified in the ``config`` file.
> > > +Please note that enabling debugging options may affect system
> > performance.
> >
> > Do we see real performance variance? If yes, can we highlight this in
> > info section?
> IMH, we could get different numbers because of different scenarios. It's only a
> reminder to not enable debug.
Thanks, but then the wording should be: 'Note: enabling debug options for ICE will affect performance. Hence the recommendation is not to enable them except when debugging the ICE PMD.'
> 
> >
> > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-04  4:40     ` Varghese, Vipin
@ 2018-12-06  5:01       ` Lu, Wenzhuo
  2018-12-06  5:33         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:01 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 12:41 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> initialization
> 
> 
> Snipped
> 
> > +	/* Set the info.ingress_table and info.egress_table
> > +	 * for UP translate table. Now just set it to 1:1 map by default
> > +	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
> > +	 */
> > +	info->ingress_table  = rte_cpu_to_le_32(0x00FAC688);
> > +	info->egress_table   = rte_cpu_to_le_32(0x00FAC688);
> > +	info->outer_up_table = rte_cpu_to_le_32(0x00FAC688);
> 
> Can we use MACRO instead of exact values for ingress, egress and outer_up
> table.
Good suggestion. Will update it in the next version.
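
For example (the macro name below is an assumption; the value is the 1:1 UP-translate map from the patch):

	/* 1:1 UP translate map: 0b 111 110 101 100 011 010 001 000 */
	#define ICE_DEFAULT_TCMAP	0x00FAC688

	info->ingress_table  = rte_cpu_to_le_32(ICE_DEFAULT_TCMAP);
	info->egress_table   = rte_cpu_to_le_32(ICE_DEFAULT_TCMAP);
	info->outer_up_table = rte_cpu_to_le_32(ICE_DEFAULT_TCMAP);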

> 
> > +	return 0;
> > +}
> > +
> 
> snipped
> 
> > +static int
> > +ice_dev_init(struct rte_eth_dev *dev) {
> > +	struct rte_pci_device *pci_dev;
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	int ret;
> > +
> > +	dev->dev_ops = &ice_eth_dev_ops;
> > +
> > +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> > +
> > +	rte_eth_copy_pci_info(dev, pci_dev);
> > +	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data-
> > >dev_private);
> > +	pf->adapter->eth_dev = dev;
> > +	pf->dev_data = dev->data;
> > +	hw->back = pf->adapter;
> > +	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
> > +	hw->vendor_id = pci_dev->id.vendor_id;
> > +	hw->device_id = pci_dev->id.device_id;
> > +	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> > +	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
> > +	hw->bus.device = pci_dev->addr.devid;
> > +	hw->bus.func = pci_dev->addr.function;
> > +
> > +	ice_init_controlq_parameter(hw);
> > +
> > +	ret = ice_init_hw(hw);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > +		return -EINVAL;
> > +	}
> 
> Definition for ice_init_hw in patch 01/20 does not check for primary-
> secondary. Are we allowing secondary to invoke ice_init_hw if it is initialized
> by primary?
It's a patch-split issue; we add the check in a later patch. Will put it in this patch in the new version.

> 
> > +
> > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > +		     hw->api_maj_ver, hw->api_min_ver);
> > +
> 
> Snipped
> 
> > +
> > +static int
> > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +
> > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> 
> Should not we check if primary is alive and NIC is used or initialized by
> primary then ' ICE_PROC_SECONDARY_CHECK_RET_0'?
I think it's not a critical issue if the process terminates abnormally without uninit.
Compared with that, I'm more concerned about this scenario: if the primary process exits and uninits the resources, the secondary process is left alone. Also, changing every PMD for this feature doesn't look like a good solution to me; I don't see many PMDs supporting it. Maybe we'd better not support it now and wait for a better whole picture.
The same below.
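
For context, the guard under discussion is of this shape (the macro name comes from the patch; the body is an assumed sketch):

	/* Control-path ops bail out early in a secondary process */
	#define ICE_PROC_SECONDARY_CHECK					\
		do {								\
			if (rte_eal_process_type() == RTE_PROC_SECONDARY)	\
				return -E_RTE_SECONDARY;			\
		} while (0)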

> 
> > +
> > +	ice_dev_close(dev);
> > +
> > +	dev->dev_ops = NULL;
> > +	dev->rx_pkt_burst = NULL;
> > +	dev->tx_pkt_burst = NULL;
> > +
> > +	rte_free(dev->data->mac_addrs);
> > +	dev->data->mac_addrs = NULL;
> > +
> > +	ice_release_vsi(pf->main_vsi);
> > +	ice_sched_cleanup_all(hw);
> > +	rte_free(hw->port_info);
> > +	ice_shutdown_all_ctrlq(hw);
> > +
> > +	return 0;
> > +}
> > +
> 
> snipped
> 
> > +static void
> > +ice_dev_close(struct rte_eth_dev *dev) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +
> > +	ICE_PROC_SECONDARY_CHECK_NO_ERR;
> > +
> 
> I am just wondering in a multi process (primary-secondary) if primary is
> killed or exited, then if we try to stop the secondary due to this check the vsi,
> pool and shutdown is not called. Should not we check if primary is still alive,
> if yes then ICE_PROC_SECONDARY_CHECK_NO_ERR?
> 
> > +	ice_res_pool_destroy(&pf->msix_pool);
> > +	ice_release_vsi(pf->main_vsi);
> > +
> > +	ice_shutdown_all_ctrlq(hw);
> > +}
> 
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-04  4:53     ` Varghese, Vipin
@ 2018-12-06  5:03       ` Lu, Wenzhuo
  2018-12-06  5:26         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:03 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 12:53 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue
> ops
> 
> snipped
> > +
> > +static int ice_init_rss(struct ice_pf *pf) {
> > +	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	struct rte_eth_dev *dev = pf->adapter->eth_dev;
> > +	struct rte_eth_rss_conf *rss_conf;
> > +	struct ice_aqc_get_set_rss_keys key;
> > +	uint16_t i, nb_q;
> > +	int ret = 0;
> > +
> > +	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
> > +	nb_q = dev->data->nb_rx_queues;
> > +	vsi->rss_key_size =
> ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
> > +	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
> > +
> > +	if (!vsi->rss_key)
> > +		vsi->rss_key = rte_zmalloc("rss_key",
> > +					   vsi->rss_key_size, 0);
> > +	if (!vsi->rss_lut)
> > +		vsi->rss_lut = rte_zmalloc("rss_lut",
> > +					   vsi->rss_lut_size, 0);
> 
> 2 suggestions
> 1. should the name be macro?
Sorry, which name?

> 2. if there are multiple 810 NIC under DPDK, should not each rss be different
> like "rss_key-%u" where it is port number?
Sorry, I don't understand the question.

> 
> Snipped
> 
> > +
> > +static int
> > +ice_dev_start(struct rte_eth_dev *dev) {
> > +	struct rte_eth_dev_data *data = dev->data;
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	uint16_t nb_rxq = 0;
> > +	uint16_t nb_txq, i;
> > +	int ret;
> > +
> > +	ICE_PROC_SECONDARY_CHECK;
> 
> Device start is not supported, but how is this differentiated from primary
> configured device vs secondary configured device.
> 
> Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w BB:DD:F'. In
> this case since we are checking process type this will return without start?
> 
> Snipped
> 
> > +
> > +static void
> > +ice_dev_stop(struct rte_eth_dev *dev) {
> > +	struct rte_eth_dev_data *data = dev->data;
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	uint16_t i;
> > +
> > +	/* avoid stopping again */
> > +	if (pf->adapter_stopped)
> > +		return;
> > +
> > +	ICE_PROC_SECONDARY_CHECK_NO_ERR;
> 
> Same as above.
> 
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-06  5:03       ` Lu, Wenzhuo
@ 2018-12-06  5:26         ` Varghese, Vipin
  2018-12-06 11:52           ` Ananyev, Konstantin
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:26 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Wenzhuo,

Please find my updates below

snipped
> > > +	if (!vsi->rss_key)
> > > +		vsi->rss_key = rte_zmalloc("rss_key",
> > > +					   vsi->rss_key_size, 0);
> > > +	if (!vsi->rss_lut)
> > > +		vsi->rss_lut = rte_zmalloc("rss_lut",
> > > +					   vsi->rss_lut_size, 0);
> >
> > 2 suggestions
> > 1. should the name be macro?
> Sorry, which name?
Would you like to convert it and use the following?
#define ICE_RSS_KEY "rss_key"
#define ICE_RSS_LUT "rss_lut"

And replace ' rte_zmalloc("rss_key",' as ' rte_zmalloc(ICE_RSS_KEY,'
> 
> > 2. if there are multiple 810 NIC under DPDK, should not each rss be
> > different like "rss_key-%u" where it is port number?
> Sorry, don't understand the question.
Let's assume we have 2 ICE_DSI NICs on the PCIe bus. Then creating 'rte_zmalloc("rss_key", ...)' for port 1 will fail, since the malloc region "rss_key" already exists for port 0.

> 
> >
> > Snipped
> >
> > > +
> > > +static int
> > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > +	struct rte_eth_dev_data *data = dev->data;
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > >dev_private);
> > > +	uint16_t nb_rxq = 0;
> > > +	uint16_t nb_txq, i;
> > > +	int ret;
> > > +
> > > +	ICE_PROC_SECONDARY_CHECK;
> >
> > Device start is not supported, but how is this differentiated from
> > primary configured device vs secondary configured device.
> >
> > Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w
> > BB:DD:F'. In this case since we are checking process type this will return without
> start?
Two updates with respect to your comment:
1. Tools and applications like dpdk-procinfo will no longer be able to pull data, since you are asking to blacklist the device.
2. If there are functions which need to be shared, like the primary using rx-0 and tx-0 while the secondary uses rx-1 and tx-1, how do we make this work?

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information
  2018-12-04  4:59     ` Varghese, Vipin
@ 2018-12-06  5:28       ` Lu, Wenzhuo
  2018-12-06  5:49         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:28 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:00 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device
> information
> 
> snipped
> > +static void
> > +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> > +*dev_info) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> > +
> > +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> > +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> > +	dev_info->max_rx_queues = vsi->nb_qps;
> > +	dev_info->max_tx_queues = vsi->nb_qps;
> > +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> > +	dev_info->max_vfs = pci_dev->max_vfs;
> > +
> > +	dev_info->rx_offload_capa =
> > +		DEV_RX_OFFLOAD_VLAN_STRIP |
> > +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_UDP_CKSUM |
> > +		DEV_RX_OFFLOAD_TCP_CKSUM |
> > +		DEV_RX_OFFLOAD_QINQ_STRIP |
> > +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> > +		DEV_RX_OFFLOAD_JUMBO_FRAME;
> > +	dev_info->tx_offload_capa =
> > +		DEV_TX_OFFLOAD_VLAN_INSERT |
> > +		DEV_TX_OFFLOAD_QINQ_INSERT |
> > +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_UDP_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_CKSUM |
> > +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> > +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_TSO;
> > +	dev_info->rx_queue_offload_capa = 0;
> > +	dev_info->tx_queue_offload_capa = 0;
> 
> Does this mean per queue offload capability is not supported? If yes, can
> you mention this in release notes under 'support or limitation'
No, it's not supported. We have a document, ice.ini, listing all the supported features; everything else is not supported.
BTW, I don't think everything that is not supported counts as a limitation.

> 
> > +
> > +	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
> > +	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) *
> > sizeof(uint32_t);
> > +	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
> > +
> > +	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +		.rx_thresh = {
> > +			.pthresh = ICE_DEFAULT_RX_PTHRESH,
> > +			.hthresh = ICE_DEFAULT_RX_HTHRESH,
> > +			.wthresh = ICE_DEFAULT_RX_WTHRESH,
> > +		},
> > +		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
> > +		.rx_drop_en = 0,
> > +		.offloads = 0,
> Is drop function and rx_conf.offload supported ? If yes, if device is not
> configured then all offload should be set?
It's the default configuration. Whether a feature is supported or not, its not being set here only means it's not enabled by default.

> 
> > +	};
> > +
> > +	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +		.tx_thresh = {
> > +			.pthresh = ICE_DEFAULT_TX_PTHRESH,
> > +			.hthresh = ICE_DEFAULT_TX_HTHRESH,
> > +			.wthresh = ICE_DEFAULT_TX_WTHRESH,
> > +		},
> > +		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
> > +		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
> > +		.offloads = 0,
> 
> If device is not configured, is not all offload be set true?
This is an info_get function; I don't understand why we're talking about configuration here.

> 
> Snipped
> 
> > +	switch (hw->port_info->phy.link_info.link_speed) {
> 
> If device switch is not configured (default value from NVM) should we
> highlight the switch can support speed 10, 100, 1000, 1000 and son on?
No, this is the capability we get from HW.

> 
> > +	case ICE_AQ_LINK_SPEED_10MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_10M;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_100MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_100M;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_1000MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_1G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_2500MB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_5GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_5G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_10GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_10G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_20GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_20G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_25GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_25G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_40GB:
> > +		dev_info->speed_capa = ETH_LINK_SPEED_40G;
> > +		break;
> > +	case ICE_AQ_LINK_SPEED_UNKNOWN:
> > +	default:
> > +		PMD_DRV_LOG(ERR, "Unknown link speed");
> > +		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
> > +		break;
> > +	}
> 
> If speed is not true as stated above, can you please add this to release notes
> and documentation.
Listed here are all the cases we can get from HW.

> 
> > +
> > +	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
> > +	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
> > +
> > +	dev_info->default_rxportconf.burst_size = 32;
> > +	dev_info->default_txportconf.burst_size = 32;
> > +	dev_info->default_rxportconf.nb_queues = 1;
> > +	dev_info->default_txportconf.nb_queues = 1;
> > +	dev_info->default_rxportconf.ring_size = 1024;
> > +	dev_info->default_txportconf.ring_size = 1024;
> 
> Can we use MACRO  (in previous PATCH there were MAX_BURST_SIZE)?
Good suggestion. Will update it in the new version.

> 
> }
> > --
> > 1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-06  5:01       ` Lu, Wenzhuo
@ 2018-12-06  5:33         ` Varghese, Vipin
  2018-12-06  6:13           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:33 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> > > +	ice_init_controlq_parameter(hw);
> > > +
> > > +	ret = ice_init_hw(hw);
> > > +	if (ret) {
> > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > +		return -EINVAL;
> > > +	}
> >
> > Definition for ice_init_hw in patch 01/20 does not check for primary-
> > secondary. Are we allowing secondary to invoke ice_init_hw if it is
> > initialized by primary?
> It's a patch split issue. We add the check in later patch. Will put it in this patch in
> the new version.
Suggestion: in the current patch, if a comment is kept, it is easier to understand that this is taken care of in a future patch.

For example, patch 2/20 carries a comment stating that support is added in patch 5/20; then patch 5/20 removes the ToDo. That makes the flow easier to read and understand.

> 
> >
> > > +
> > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > +
> >
> > Snipped
> >
> > > +
> > > +static int
> > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > >dev_private);
> > > +
> > > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> >
> > Should not we check if primary is alive and NIC is used or initialized
> > by primary then ' ICE_PROC_SECONDARY_CHECK_RET_0'?
> I think it's not a critical issue if the process is terminate abnormally without uninit.
> Comparing with that, I have more concern about this scenario, if the primary
> process exit and uninit the resource, the secondary process is left alone.
Since the primary is the application which reserves the hugepage memory (malloc, zmalloc, memzone), when the secondary is killed or stopped the whole hugepage memory is released. I am a bit confused about what the suggested check affects.

 And also
> to me it looks not a good solution to change every PMD for this feature. 
I am not aware of why other PMDs are done a specific way. In my humble opinion, if there is a right way, let it be used rather than doing it another way.

I don't
> see many PMD support it. Maybe we'd better not support it now and wait for a
> better whole picture.
I'll wait for others to comment on this approach.

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting
  2018-12-04  5:19     ` Varghese, Vipin
@ 2018-12-06  5:34       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:34 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Zhao1, Wei

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:19 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type
> getting
> 
> snipped
> > +static inline uint32_t
> > +ice_get_default_pkt_type(uint16_t ptype) {
> 
> Suggestion: should we check 'ptype >= RTE_PTYPE_UNKNOWN ' return?
Good suggestion. Will update it in the new version.
> 
> Suggestion: is it ok to use MACRO instead of array.
The array is for better performance. I don't quite follow the macro idea; would you like to give more details? Thanks.
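
To illustrate the trade-off, a sketch of the table approach (the array name, size, and sample entry are assumptions): on the fast path each packet costs one indexed load, where a macro or switch would expand to a chain of comparisons.

	static inline uint32_t
	ice_get_default_pkt_type(uint16_t ptype)
	{
		/* filled at build time: index = HW ptype, value = RTE_PTYPE_* */
		static const uint32_t type_table[ICE_MAX_PKT_TYPE]
			__rte_cache_aligned = {
			[1] = RTE_PTYPE_L2_ETHER,
			/* ... one entry per HW ptype ... */
		};

		return type_table[ptype];
	}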

> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-04  5:25     ` Varghese, Vipin
  2018-12-04  5:51       ` Varghese, Vipin
@ 2018-12-06  5:35       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:35 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:26 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
> 
> snipped
> > +static int
> > +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct rte_eth_dev_data *dev_data = pf->dev_data;
> > +	uint32_t frame_size = mtu + ETHER_HDR_LEN
> > +			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
> 
> Should this be ' ICE_VLAN_TAG_SIZE' or ' ICE_SWITCH_VLAN_TAG_SIZE'?
I don't get it. What should be changed?

> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics
  2018-12-04  5:35     ` Varghese, Vipin
@ 2018-12-06  5:37       ` Lu, Wenzhuo
  2018-12-06  5:50         ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:37 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Guo, Jia

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:35 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics
> 
> snipped
> > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> 
> In earlier patch for 'mtu set check' we added VSI SWITCH VLAN. Should we
> add VSI VLAN here?
No need, they're different functions. We account for the CRC length here because the HW counts the packet length before the CRC is added.


> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
  2018-12-04  5:42     ` Varghese, Vipin
  2018-12-04  5:44       ` Varghese, Vipin
@ 2018-12-06  5:39       ` Lu, Wenzhuo
  2018-12-06  5:55         ` Varghese, Vipin
  1 sibling, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:39 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:42 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
> 
> snipped
> > +uint16_t
> > +ice_recv_pkts(void *rx_queue,
> > +	      struct rte_mbuf **rx_pkts,
> > +	      uint16_t nb_pkts)
> > +{
> > +	struct ice_rx_queue *rxq = rx_queue;
> > +	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> > +	volatile union ice_rx_desc *rxdp;
> > +	union ice_rx_desc rxd;
> > +	struct ice_rx_entry *sw_ring = rxq->sw_ring;
> > +	struct ice_rx_entry *rxe;
> > +	struct rte_mbuf *nmb; /* new allocated mbuf */
> > +	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> > +	uint16_t rx_id = rxq->rx_tail;
> > +	uint16_t nb_rx = 0;
> > +	uint16_t nb_hold = 0;
> > +	uint16_t rx_packet_len;
> > +	uint32_t rx_status;
> > +	uint64_t qword1;
> > +	uint64_t dma_addr;
> > +	uint64_t pkt_flags = 0;
> > +	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > +	struct rte_eth_dev *dev;
> > +
> > +	while (nb_rx < nb_pkts) {
> > +		rxdp = &rx_ring[rx_id];
> > +		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
> > +		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> > +			    ICE_RXD_QW1_STATUS_S;
> > +
> > +		/* Check the DD bit first */
> > +		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> > +			break;
> > +
> > +		/* allocate mbuf */
> > +		nmb = rte_mbuf_raw_alloc(rxq->mp);
> > +		if (unlikely(!nmb)) {
> > +			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> > +			dev->data->rx_mbuf_alloc_failed++;
> > +			break;
> > +		}
> 
> Should we check if the received packet length is greater than the mbuf
> pkt_len, in which case we need a bulk alloc with n_segs?
We cannot do it here: this is the fast path, and it would hurt performance badly. So we do the check beforehand and choose the right RX function (see the sketch below).
Normally, multi-segment (n_segs) support is the default.
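
A sketch of what "choose the right RX function" means in practice -- decided once at configure/start time, not per packet (the scattered handler name is an assumption at this point in the series):

	if (dev->data->scattered_rx)
		dev->rx_pkt_burst = ice_recv_scattered_pkts; /* chained mbufs */
	else
		dev->rx_pkt_burst = ice_recv_pkts; /* single-seg fast path */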

> 
> > +		rxd = *rxdp; /* copy descriptor in ring to temp variable*/
> > +
> > +		nb_hold++;
> > +		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
> > +		rx_id++;
> > +		if (unlikely(rx_id == rxq->nb_rx_desc))
> > +			rx_id = 0;
> > +		rxm = rxe->mbuf;
> > +		rxe->mbuf = nmb;
> > +		dma_addr =
> > +			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> > +
> > +		/**
> > +		 * fill the read format of descriptor with physic address in
> > +		 * new allocated mbuf: nmb
> > +		 */
> > +		rxdp->read.hdr_addr = 0;
> > +		rxdp->read.pkt_addr = dma_addr;
> > +
> > +		/* calculate rx_packet_len of the received pkt */
> > +		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
> > +				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
> > +
> > +		/* fill old mbuf with received descriptor: rxd */
> > +		rxm->data_off = RTE_PKTMBUF_HEADROOM;
> > +		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
> > +		rxm->nb_segs = 1;
> 
> Same comment for above for multi segment alloc for larger packets or
> smaller pkt_len in mempool?
> 
> Snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-04  5:51       ` Varghese, Vipin
@ 2018-12-06  5:41         ` Lu, Wenzhuo
  2018-12-06  5:56           ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:41 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:52 PM
> To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
> 
> Can you point me to the patch where 'get_mtu' is defined?
There's no 'get_mtu'; no such op exists.
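
For reference, ethdev serves the MTU read from dev->data itself, so there is nothing for a PMD to implement (behavior as of DPDK 18.11):

	uint16_t mtu;

	/* rte_eth_dev_get_mtu() just returns dev->data->mtu; only mtu_set
	 * is an eth_dev op, which is what this patch provides. */
	rte_eth_dev_get_mtu(port_id, &mtu);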

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information
  2018-12-06  5:28       ` Lu, Wenzhuo
@ 2018-12-06  5:49         ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:49 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

> >
> > snipped
> > > +static void
> > > +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> > > +*dev_info) {
> > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > >dev_private);
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > +	struct ice_vsi *vsi = pf->main_vsi;
> > > +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> > > +
> > > +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> > > +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> > > +	dev_info->max_rx_queues = vsi->nb_qps;
> > > +	dev_info->max_tx_queues = vsi->nb_qps;
> > > +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> > > +	dev_info->max_vfs = pci_dev->max_vfs;
> > > +
> > > +	dev_info->rx_offload_capa =
> > > +		DEV_RX_OFFLOAD_VLAN_STRIP |
> > > +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> > > +		DEV_RX_OFFLOAD_UDP_CKSUM |
> > > +		DEV_RX_OFFLOAD_TCP_CKSUM |
> > > +		DEV_RX_OFFLOAD_QINQ_STRIP |
> > > +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > > +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> > > +		DEV_RX_OFFLOAD_JUMBO_FRAME;
> > > +	dev_info->tx_offload_capa =
> > > +		DEV_TX_OFFLOAD_VLAN_INSERT |
> > > +		DEV_TX_OFFLOAD_QINQ_INSERT |
> > > +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> > > +		DEV_TX_OFFLOAD_UDP_CKSUM |
> > > +		DEV_TX_OFFLOAD_TCP_CKSUM |
> > > +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> > > +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > > +		DEV_TX_OFFLOAD_TCP_TSO;
> > > +	dev_info->rx_queue_offload_capa = 0;
> > > +	dev_info->tx_queue_offload_capa = 0;
> >
> > Does this mean per queue offload capability is not supported? If yes,
> > can you mention this in release notes under 'support or limitation'
> No, it's not supported. We have a document, ice.ini, to list all the features
> supported. All the others are not supported.
> BTW, I don't think anything not supported is limitation.
If I understand correctly, the ICE_DSI_PMD is advertising that it has no per-queue offloads for RX and TX, while ice.ini lists the offloads it supports at port level. So let me rephrase the question: if you support a port-level offload capability, it applies to all RX and TX queues. But if you report the queue-level offload capability as 0 for RX and TX, then rte_eth_rx_queue_setup and rte_eth_tx_queue_setup should fail if a queue offload is enabled. Is this the correct understanding?
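
In other words (a sketch, assuming ethdev's standard offload validation):

	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

	rxconf.offloads = DEV_RX_OFFLOAD_VLAN_STRIP; /* per-queue request */
	ret = rte_eth_rx_queue_setup(port_id, 0, 1024,
				     rte_eth_dev_socket_id(port_id),
				     &rxconf, mb_pool);
	/* expected -EINVAL here if rx_queue_offload_capa really is 0 */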

> 
> >
> > > +
> > > +	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
> > > +	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) *
> > > sizeof(uint32_t);
> > > +	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
> > > +
> > > +	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > > +		.rx_thresh = {
> > > +			.pthresh = ICE_DEFAULT_RX_PTHRESH,
> > > +			.hthresh = ICE_DEFAULT_RX_HTHRESH,
> > > +			.wthresh = ICE_DEFAULT_RX_WTHRESH,
> > > +		},
> > > +		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
> > > +		.rx_drop_en = 0,
> > > +		.offloads = 0,
> > Is drop function and rx_conf.offload supported ? If yes, if device is
> > not configured then all offload should be set?
> It's the default configuration. No matter a feature supported or not, it's not set
> only means it's not enabled here.
So the default behaviour is drop_en disabled, and in the default case, with RSS disabled, all packets will land on rx-queue-0.

> 
> >
> > > +	};
> > > +
> > > +	dev_info->default_txconf = (struct rte_eth_txconf) {
> > > +		.tx_thresh = {
> > > +			.pthresh = ICE_DEFAULT_TX_PTHRESH,
> > > +			.hthresh = ICE_DEFAULT_TX_HTHRESH,
> > > +			.wthresh = ICE_DEFAULT_TX_WTHRESH,
> > > +		},
> > > +		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
> > > +		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
> > > +		.offloads = 0,
> >
> > If device is not configured, is not all offload be set true?
> This is an info_get function. I don't understand why we are talking about
> configuration here.
Same as above

> 
> >
> > Snipped
> >
> > > +	switch (hw->port_info->phy.link_info.link_speed) {
> >
> > If device switch is not configured (default value from NVM) should we
> > highlight the switch can support speed 10, 100, 1000, 1000 and son on?
> No, this is the capability obtained from the HW.
If the HW supports or is configured for 10M, 100M or 25G, then that speed should be returned correctly; this I agree with. But when the device is queried for its capability, it should report all the speeds the port supports. Am I right?
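
In other words, what I would expect is a cumulative bitmask, along the lines of this sketch (illustrative speed values only, not the patch code):

    /* speed_capa derived from the PHY capabilities of the port,
     * independent of the currently negotiated link speed. */
    dev_info->speed_capa = ETH_LINK_SPEED_10M |
                           ETH_LINK_SPEED_100M |
                           ETH_LINK_SPEED_1G |
                           ETH_LINK_SPEED_10G |
                           ETH_LINK_SPEED_25G;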

> 
> >
> > > +	case ICE_AQ_LINK_SPEED_10MB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_10M;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_100MB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_100M;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_1000MB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_1G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_2500MB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_5GB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_5G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_10GB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_10G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_20GB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_20G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_25GB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_25G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_40GB:
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_40G;
> > > +		break;
> > > +	case ICE_AQ_LINK_SPEED_UNKNOWN:
> > > +	default:
> > > +		PMD_DRV_LOG(ERR, "Unknown link speed");
> > > +		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
> > > +		break;
> > > +	}
> >
> > If speed is not true as stated above, can you please add this to
> > release notes and documentation.
> Here we listed all the cases we can get from the HW.
Please add this to the ice_dsi documentation also.

snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics
  2018-12-06  5:37       ` Lu, Wenzhuo
@ 2018-12-06  5:50         ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:50 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Guo, Jia

Snipped
> >
> > snipped
> > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> >
> > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN. Should
> > we add VSI VLAN here?
> No need, they're different functions. We adjust for the CRC length here because
> the HW counts the packet length before the CRC is added.
So you are not fetching the stats from the switch HW registers, is that correct? How will you then get the stats for what was actually transmitted in xstats? As I understand it, xstats is for the switch HW stats, right?
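
For reference, this is how I read the quoted adjustment (the counter semantics are my assumption, inferred from the subtraction itself):

    /* Assumption: the HW byte counter includes the 4-byte Ethernet CRC of
     * each frame, while DPDK stats exclude it. E.g. for 10 unicast frames
     * of 64 bytes on the wire:
     *   hw tx_bytes       = 10 * 64 = 640
     *   reported tx_bytes = 640 - 10 * ETHER_CRC_LEN = 600 */
    uint64_t tx_frames = ns->eth.tx_unicast + ns->eth.tx_multicast +
                         ns->eth.tx_broadcast;
    ns->eth.tx_bytes -= tx_frames * ETHER_CRC_LEN;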
> 
> 
> > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
  2018-12-06  5:39       ` Lu, Wenzhuo
@ 2018-12-06  5:55         ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:55 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

> >
> > snipped
> > > +uint16_t
> > > +ice_recv_pkts(void *rx_queue,
> > > +	      struct rte_mbuf **rx_pkts,
> > > +	      uint16_t nb_pkts)
> > > +{
> > > +	struct ice_rx_queue *rxq = rx_queue;
> > > +	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> > > +	volatile union ice_rx_desc *rxdp;
> > > +	union ice_rx_desc rxd;
> > > +	struct ice_rx_entry *sw_ring = rxq->sw_ring;
> > > +	struct ice_rx_entry *rxe;
> > > +	struct rte_mbuf *nmb; /* new allocated mbuf */
> > > +	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> > > +	uint16_t rx_id = rxq->rx_tail;
> > > +	uint16_t nb_rx = 0;
> > > +	uint16_t nb_hold = 0;
> > > +	uint16_t rx_packet_len;
> > > +	uint32_t rx_status;
> > > +	uint64_t qword1;
> > > +	uint64_t dma_addr;
> > > +	uint64_t pkt_flags = 0;
> > > +	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > > +	struct rte_eth_dev *dev;
> > > +
> > > +	while (nb_rx < nb_pkts) {
> > > +		rxdp = &rx_ring[rx_id];
> > > +		qword1 = rte_le_to_cpu_64(rxdp-
> > > >wb.qword1.status_error_len);
> > > +		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> > > +			    ICE_RXD_QW1_STATUS_S;
> > > +
> > > +		/* Check the DD bit first */
> > > +		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> > > +			break;
> > > +
> > > +		/* allocate mbuf */
> > > +		nmb = rte_mbuf_raw_alloc(rxq->mp);
> > > +		if (unlikely(!nmb)) {
> > > +			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> > > +			dev->data->rx_mbuf_alloc_failed++;
> > > +			break;
> > > +		}
> >
> > Should we check if the received packet length is greater than mbug
> > pkt_len then we need bulk alloc with n_segs?
> We cannot do it here. It's the fast path; it would hurt performance badly. So we
> do the check beforehand and choose the right RX function.
> Normally, multi-segment (n_segs) reception is supported by default.
Maybe I am not clear on this approach; let's assume a packet of 6000 bytes comes in and the mempool mbuf data size is 2000 bytes. Storing the 6000-byte pkt_len then requires 3 segments, each holding 2000 bytes of data.

As per your update, since performance would be affected, for a 6000-byte packet you will pick only 1 segment of 2000 bytes and the rest is discarded. Is this the correct understanding?
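
If I follow the explanation, the configure-time selection would look roughly like this (a sketch; ice_recv_scattered_pkts is my assumed name for the multi-segment handler, not necessarily what this patch set uses):

    /* At setup time, compare the mbuf data room against the maximum frame
     * size and pick the RX burst function once, so the hot path never has
     * to re-check this per packet. */
    uint32_t buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
                        RTE_PKTMBUF_HEADROOM;

    if (dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
            dev->rx_pkt_burst = ice_recv_scattered_pkts; /* chains mbufs */
    else
            dev->rx_pkt_burst = ice_recv_pkts; /* single-mbuf fast path */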

Snipped.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  4:28         ` Varghese, Vipin
@ 2018-12-06  5:55           ` Lu, Wenzhuo
  2018-12-06  6:03             ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:55 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 12:29 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
> 
> Hi Wenzhuo,
> 
> thanks for the updates, couple of follow up and suggestions
> 
> snipped
> > >
> > > > +Intel® ICE driver
> > > > +==================
> > > > +
> > > > +This directory contains source code of FreeBSD ice driver of
> > > > +version
> > > > +2018.10.30 released by the team which develops basic drivers for
> > > > +any ice NIC. The directory of base/ contains the original source
> package.
> > > > +This driver is valid for the product(s) listed below
> > > > +
> > > > +* Intel® Ethernet Network Adapters E810
> > > > +
> > > > +Updating the driver
> > > > +===================
> > > > +
> > > > +NOTE: The source code in this directory should not be modified
> > > > +apart from the following file(s):
> > > > +
> > > > +    ice_osdep.h
> > >
> > > Is this README persistent in upcoming releases of 'driver/net/ice'?
> > Yes.
> If Linux driver is enabled in 4.20.1 or higher, then will the wording 'This
> directory contains source code of FreeBSD ice driver of' still hold true?
Although I don't understand why we are talking about the Linux driver version, I think the answer is yes.

> 
> > >
> snipped
> > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> > > > ICE_AQC_MAN_MAC_WR_S)
> > >
> > > Can the code be rearranged for?
> > We don’t want to change the base code for the sake of maintenance.
> I do not follow this, is not your team or individual maintaining the same?
> because there should be maintainer for this PMD.
This code is not implemented by us; you can take us as representatives of the development team.
If there is any bug, we'll handle it.

> 
> snipped
> > > > +struct ice_aqc_get_phy_caps_data {
> > > > +	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
> > > > +	__le64 reserved;
> > > > +	u8 caps;
> > > > +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> > > > +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> > > > +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> > > > +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> > > > +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> > > > +#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
> > > > +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> > > > +#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
> > > > +#define ICE_AQC_PHY_CAPS_MASK
> > > > 	MAKEMASK(0xff, 0)
> > > > +	u8 low_power_ctrl;
> > > > +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG
> > > 	BIT(0)
> > > > +	__le16 eee_cap;
> > > > +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
> > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> > > > +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> > > > +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
> > > > +	__le16 eeer_value;
> > > > +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
> > > > +	u8 phy_fw_ver[8];
> > > > +	u8 link_fec_options;
> > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> > > > +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
> > > > +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> > > > +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
> > > > +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> > > > +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> > > > +#define ICE_AQC_PHY_FEC_MASK
> > > > 	MAKEMASK(0xdf, 0)
> > > > +	u8 extended_compliance_code;
> > > > +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> > > > +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> > > > +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
> > > > +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> > > > +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
> > > > +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> > > > +	u8 qualified_module_count;
> > > > +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> > > > +	struct {
> > > > +		u8 v_oui[3];
> > > > +		u8 rsvd3;
> > > > +		u8 v_part[16];
> > > > +		__le32 v_rev;
> > > > +		__le64 rsvd8;
> > > > +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> > > > +};
> > > > +
> > >
> > > Does the NIC support physical loopback? I am not able to find here.
> > Not sure about it. But no plan for this at this stage.
> Please add this in release note and PMD documentation the same.
No, we list all the things that are done. It doesn't make sense to list everything that is not supported or not implemented.

> 
> >
> > >
> > > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> > >
> > > Does Low Power PMD is exposed to DPDK? If yes, can you mention the
> > > performance numbers or variance in Release documents?
> > No plan for it at this release.
> Would not it be better to not add here or update as comment and release
> note about the same. Right now it is like dead code.
> 
> >
> > >
> > > Snipped
> > >
> > > > +
> > > > +/* Memory types */
> > > > +enum ice_memset_type {
> > > > +	ICE_NONDMA_MEM = 0,
> > > > +	ICE_DMA_MEM
> > > > +};
> > > > +
> > > > +/* Memcpy types */
> > > > +enum ice_memcpy_type {
> > > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > > +	ICE_NONDMA_TO_DMA,
> > > > +	ICE_DMA_TO_DMA,
> > > > +	ICE_DMA_TO_NONDMA
> > > > +};
> > > > +
> > >
> > > Is this exposed to user (rte_eth_dev) API? If yes, can you please
> > > let know the performance impact in RX|TX in release notes too.
> > No plan for it at this release.
> Please update
> a. what is difference least as comments.
> b. in release notes about the same.
> 
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
  2018-12-06  5:41         ` Lu, Wenzhuo
@ 2018-12-06  5:56           ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  5:56 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Thanks, I get what you are saying: by default 'rte_eth_dev_get_mtu' fetches the MTU from 'dev->data->mtu'. But is not ICE_DSI a switch, so the MTU would be picked up from the switch HW?
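
For reference, a simplified version of what librte_ethdev does here (no dev_ops involved, which is why there is no 'get_mtu' callback):

    int
    rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
    {
            struct rte_eth_dev *dev;

            RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
            dev = &rte_eth_devices[port_id];
            *mtu = dev->data->mtu; /* cached SW value, not a register read */
            return 0;
    }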

> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Thursday, December 6, 2018 11:11 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
> 
> Hi Vipin,
> 
> > -----Original Message-----
> > From: Varghese, Vipin
> > Sent: Tuesday, December 4, 2018 1:52 PM
> > To: Varghese, Vipin <vipin.varghese@intel.com>; Lu, Wenzhuo
> > <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting
> >
> > Can you point me to the patch where 'get_mtu' is defined?
> There's no 'get_mtu'. No such ops.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-06  4:31         ` Varghese, Vipin
@ 2018-12-06  5:59           ` Lu, Wenzhuo
  2018-12-06  6:05             ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  5:59 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 12:31 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
> 
> Hi Wenzhuo
> 
> snipped
> > >
> > > Should not meson build option be add start. That is in patch 1/20 so
> > > compile options does not fail?
> > It will not fail. Enabling the compile earlier only means the code can be
> compiled.
> > But, to use this device we do need the whole patch set. From this
> > point of view, compiling it at the end maybe better.
> Thanks for update, so will 'meson-build' success if apply 3 patches?
Sure, the meson build will not be broken by any one of these patches. It is only from this patch onwards that what meson builds supports ice.

> 
> >
> > >
> > > Thanks
> > > Vipin Varghese

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  5:55           ` Lu, Wenzhuo
@ 2018-12-06  6:03             ` Varghese, Vipin
  2018-12-06  6:23               ` Ferruh Yigit
  2018-12-06  6:38               ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:03 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

snipped
> > > >
> > > > > +Intel® ICE driver
> > > > > +==================
> > > > > +
> > > > > +This directory contains source code of FreeBSD ice driver of
> > > > > +version
> > > > > +2018.10.30 released by the team which develops basic drivers
> > > > > +for any ice NIC. The directory of base/ contains the original
> > > > > +source
> > package.
> > > > > +This driver is valid for the product(s) listed below
> > > > > +
> > > > > +* Intel® Ethernet Network Adapters E810
> > > > > +
> > > > > +Updating the driver
> > > > > +===================
> > > > > +
> > > > > +NOTE: The source code in this directory should not be modified
> > > > > +apart from the following file(s):
> > > > > +
> > > > > +    ice_osdep.h
> > > >
> > > > Is this README persistent in upcoming releases of 'driver/net/ice'?
> > > Yes.
> > If Linux driver is enabled in 4.20.1 or higher, then will the wording
> > 'This directory contains source code of FreeBSD ice driver of' still hold true?
> Although I don't understand why we talk about the Linux driver version, but I
> think the answer is yes.
Ok, the reason for bringing up the Linux driver is:
1. you would be planning to push the default kernel driver for ICE to Linux.
2. the documentation states FreeBSD 2018.10.30, so if a future enhancement is pulled from the Linux driver, will it be added here too?

> 
> >
> > > >
> > snipped
> > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> > > > > ICE_AQC_MAN_MAC_WR_S)
> > > >
> > > > Can the code be rearranged for?
> > > We don’t want to change the base code for the sake of maintenance.
> > I do not follow this, is not your team or individual maintaining the same?
> > because there should be maintainer for this PMD.
> This code is not implemented by us. You can take us as a representative of the
> development team.
> If there is any bug, we'll handle it.
Ok, so currently the team which maintains the code does not want to change the order of the code for the sake of maintenance. A confusing approach, but I leave this for other members to comment on.

> 
> >
> > snipped
> > > > > +struct ice_aqc_get_phy_caps_data {
> > > > > +	__le64 phy_type_low; /* Use values from
> ICE_PHY_TYPE_LOW_* */
> > > > > +	__le64 reserved;
> > > > > +	u8 caps;
> > > > > +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> > > > > +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> > > > > +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> > > > > +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> > > > > +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> > > > > +#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
> > > > > +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> > > > > +#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
> > > > > +#define ICE_AQC_PHY_CAPS_MASK
> > > > > 	MAKEMASK(0xff, 0)
> > > > > +	u8 low_power_ctrl;
> > > > > +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG
> > > > 	BIT(0)
> > > > > +	__le16 eee_cap;
> > > > > +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> > > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> > > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> > > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
> > > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> > > > > +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> > > > > +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
> > > > > +	__le16 eeer_value;
> > > > > +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
> > > > > +	u8 phy_fw_ver[8];
> > > > > +	u8 link_fec_options;
> > > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> > > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> > > > > +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
> > > > > +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> > > > > +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
> > > > > +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> > > > > +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> > > > > +#define ICE_AQC_PHY_FEC_MASK
> > > > > 	MAKEMASK(0xdf, 0)
> > > > > +	u8 extended_compliance_code;
> > > > > +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> > > > > +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
> > > > > +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> > > > > +	u8 qualified_module_count;
> > > > > +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> > > > > +	struct {
> > > > > +		u8 v_oui[3];
> > > > > +		u8 rsvd3;
> > > > > +		u8 v_part[16];
> > > > > +		__le32 v_rev;
> > > > > +		__le64 rsvd8;
> > > > > +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> > > > > +};
> > > > > +
> > > >
> > > > Does the NIC support physical loopback? I am not able to find here.
> > > Not sure about it. But no plan for this at this stage.
> > Please add this in release note and PMD documentation the same.
> No, we list all the things done. It doesn't make sense to list everything not
> supported or not implemented.
I think it is necessary, because the application 'testpmd' has the option 'set tx loopback (port_id) (on|off)'. So if the ICE DSI PMD does not support it and it fails in testpmd, both the DTS team and DPDK users should be made aware of the limitation via documentation.

> 
> >
> > >
> > > >
> > > > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> > > >
> > > > Does Low Power PMD is exposed to DPDK? If yes, can you mention the
> > > > performance numbers or variance in Release documents?
> > > No plan for it at this release.
> > Would not it be better to not add here or update as comment and
> > release note about the same. Right now it is like dead code.
> >
> > >
> > > >
> > > > Snipped
> > > >
> > > > > +
> > > > > +/* Memory types */
> > > > > +enum ice_memset_type {
> > > > > +	ICE_NONDMA_MEM = 0,
> > > > > +	ICE_DMA_MEM
> > > > > +};
> > > > > +
> > > > > +/* Memcpy types */
> > > > > +enum ice_memcpy_type {
> > > > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > > > +	ICE_NONDMA_TO_DMA,
> > > > > +	ICE_DMA_TO_DMA,
> > > > > +	ICE_DMA_TO_NONDMA
> > > > > +};
> > > > > +
> > > >
> > > > Is this exposed to user (rte_eth_dev) API? If yes, can you please
> > > > let know the performance impact in RX|TX in release notes too.
> > > No plan for it at this release.
> > Please update
> > a. what is difference least as comments.
> > b. in release notes about the same.
> >
> > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-06  4:34         ` Varghese, Vipin
@ 2018-12-06  6:05           ` Lu, Wenzhuo
  2018-12-06  6:08             ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  6:05 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 12:34 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> update release note
> 
> Thanks Wenzhuo
> 
> snipped
> > >
> > > Do we support Traffic Manager and Inline Crypto? If not, can we add
> > > this also to limitations?
> > The style is only listed supported features here. I don't think we'll
> > list everything not supported as limitation.
> Thanks for the correction in format, hence the suggested action item is
> update limitation section to state what is not supported.
> 
> > The same below.
> >
> > >
> > > > +FW version           = Y
> > > > +Module EEPROM dump   = Y
> > > > +Multiprocess aware   = Y
> > > > +BSD nic_uio          = Y
> > > > +Linux UIO            = Y
> > > > +Linux VFIO           = Y
> > > > +x86-32               = Y
> > > > +x86-64               = Y
> > >
> > > Is cross compile for ARM and PPC disabled since it uses Intel specific ISA?
> > > Should this be added to limitations?
> Is powerpc and arm cross build with non avx and sse ISA supported? If no,
> will default '.config' has 'ICE_PMD=n'?
Yes, it should be. Currently at least all the Intel NICs set 'y' here. If there is any problem, we should handle it as a bug.

> 
> > >
> > >
> > > Snipped
> > >
> > > > +
> > > > +Config File Options
> > > > +~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +The following options can be modified in the ``config`` file.
> > > > +Please note that enabling debugging options may affect system
> > > performance.
> > >
> > > Do we see real performance variance? If yes, can we highlight this
> > > in info section?
> > IMH, we could get different numbers because of different scenarios.
> > It's only a reminder to not enable debug.
> Thanks, but then wording should 'Note: enabling debug option for ICE will
> have difference in performance. Hence recommendation is not to enable
> unless for debugging the ICE PMD.'
> >
> > >
> > > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
  2018-12-06  5:59           ` Lu, Wenzhuo
@ 2018-12-06  6:05             ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:05 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev



> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Thursday, December 6, 2018 11:29 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
> 
> Hi Vipin,
> 
> 
> > -----Original Message-----
> > From: Varghese, Vipin
> > Sent: Thursday, December 6, 2018 12:31 PM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build
> >
> > Hi Wenzhuo
> >
> > snipped
> > > >
> > > > Should not meson build option be add start. That is in patch 1/20
> > > > so compile options does not fail?
> > > It will not fail. Enabling the compile earlier only means the code
> > > can be
> > compiled.
> > > But, to use this device we do need the whole patch set. From this
> > > point of view, compiling it at the end maybe better.
> > Thanks for update, so will 'meson-build' success if apply 3 patches?
> Sure, meson build will not be broken by any one of these patches. Only until this
> patch, what built by meson can support ice.
Thanks for confirming that you have tried './devtools/test-meson-builds.sh' and that the intermediate builds for the ICE_DSI PMD do not fail.

> 
> >
> > >
> > > >
> > > > Thanks
> > > > Vipin Varghese

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-06  6:05           ` Lu, Wenzhuo
@ 2018-12-06  6:08             ` Varghese, Vipin
  2018-12-06  6:23               ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:08 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev



> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Thursday, December 6, 2018 11:36 AM
> To: Varghese, Vipin <vipin.varghese@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update
> release note
> 
> Hi Vipin,
> 
> 
> > -----Original Message-----
> > From: Varghese, Vipin
> > Sent: Thursday, December 6, 2018 12:34 PM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> > update release note
> >
> > Thanks Wenzhuo
> >
> > snipped
> > > >
> > > > Do we support Traffic Manager and Inline Crypto? If not, can we
> > > > add this also to limitations?
> > > The style is only listed supported features here. I don't think
> > > we'll list everything not supported as limitation.
> > Thanks for the correction in format, hence the suggested action item
> > is update limitation section to state what is not supported.
> >
> > > The same below.
> > >
> > > >
> > > > > +FW version           = Y
> > > > > +Module EEPROM dump   = Y
> > > > > +Multiprocess aware   = Y
> > > > > +BSD nic_uio          = Y
> > > > > +Linux UIO            = Y
> > > > > +Linux VFIO           = Y
> > > > > +x86-32               = Y
> > > > > +x86-64               = Y
> > > >
> > > > Is cross compile for ARM and PPC disabled since it uses Intel specific ISA?
> > > > Should this be added to limitations?
> > Is powerpc and arm cross build with non avx and sse ISA supported? If
> > no, will default '.config' has 'ICE_PMD=n'?
> Yes, it should be. Currently at least all the Intel NICs set 'y' here. If any problem,
> we should handle it as bug.
So my understanding from 'yes' is that the ICE_DSI PMD contains only scalar functions, so it cross-builds successfully for ARM and PowerPC. Hence you are leaving it as '=y' like the other NICs.

If the above is true, why are ARM and PowerPC not added to your documentation?

> 
> >
> > > >
> > > >
> > > > Snipped
> > > >
> > > > > +
> > > > > +Config File Options
> > > > > +~~~~~~~~~~~~~~~~~~~
> > > > > +
> > > > > +The following options can be modified in the ``config`` file.
> > > > > +Please note that enabling debugging options may affect system
> > > > performance.
> > > >
> > > > Do we see real performance variance? If yes, can we highlight this
> > > > in info section?
> > > IMH, we could get different numbers because of different scenarios.
> > > It's only a reminder to not enable debug.
> > Thanks, but then wording should 'Note: enabling debug option for ICE
> > will have difference in performance. Hence recommendation is not to
> > enable unless for debugging the ICE PMD.'
> > >
> > > >
> > > > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-06  5:33         ` Varghese, Vipin
@ 2018-12-06  6:13           ` Lu, Wenzhuo
  2018-12-06  6:31             ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  6:13 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 1:33 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> initialization
> 
> snipped
> > > > +	ice_init_controlq_parameter(hw);
> > > > +
> > > > +	ret = ice_init_hw(hw);
> > > > +	if (ret) {
> > > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > > +		return -EINVAL;
> > > > +	}
> > >
> > > Definition for ice_init_hw in patch 01/20 does not check for
> > > primary- secondary. Are we allowing secondary to invoke ice_init_hw
> > > if it is initialized by primary?
> > It's a patch split issue. We add the check in later patch. Will put it
> > in this patch in the new version.
> Suggestion in current patch if comment is kept it will be easier to understand
> that it is taken care in future patch.
> 
> Example patch 2/20 has comment stating adding support in patch 5/20.
> Then in patch 5/20 it removes the ToDo it is easier to read and understand
> the flow
I mean I made a mistake by putting the check code in a later patch; it should be in this patch, and I plan to correct it.
But currently I think we're running out of time, so I prefer not to support multi-process in this release.
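
For reference, the usual DPDK guard for a PMD that does not fully support secondary processes is a check like the following (a sketch only, not the patch code):

    static int
    ice_dev_init(struct rte_eth_dev *dev)
    {
            /* Only the primary process touches the HW; a secondary
             * process reuses what the primary has already set up. */
            if (rte_eal_process_type() != RTE_PROC_PRIMARY)
                    return 0;
            /* ... primary-only HW initialization ... */
            return 0;
    }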

> 
> >
> > >
> > > > +
> > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > +
> > >
> > > Snipped
> > >
> > > > +
> > > > +static int
> > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > >dev_private);
> > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > >dev_private);
> > > > +
> > > > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> > >
> > > Should not we check if primary is alive and NIC is used or
> > > initialized by primary then ' ICE_PROC_SECONDARY_CHECK_RET_0'?
> > I think it's not a critical issue if the process is terminate abnormally without
> uninit.
> > Comparing with that, I have more concern about this scenario, if the
> > primary process exit and uninit the resource, the secondary process is left
> alone.
> Since primary is application which reserves the huge page memory (malloc,
> zmalloc, memzone). So when secondary is killed or stop whole huge pages
> are released. I am bit confused what is check suggested affecting?
> 
>  And also
> > to me it looks not a good solution to change every PMD for this feature.
> I am not aware about why other PMD are done in specific way. In my
> humble opinion, if there is a right way let it be used rather than doing other
> way.
> 
> I don't
> > see many PMD support it. Maybe we'd better not support it now and wait
> > for a better whole picture.
> I wait for others to comment to this approach.
> 
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-06  6:08             ` Varghese, Vipin
@ 2018-12-06  6:23               ` Lu, Wenzhuo
  2018-12-06  6:25                 ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  6:23 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:09 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> update release note
> 
> 
> 
> > -----Original Message-----
> > From: Lu, Wenzhuo
> > Sent: Thursday, December 6, 2018 11:36 AM
> > To: Varghese, Vipin <vipin.varghese@intel.com>; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> > update release note
> >
> > Hi Vipin,
> >
> >
> > > -----Original Message-----
> > > From: Varghese, Vipin
> > > Sent: Thursday, December 6, 2018 12:34 PM
> > > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description
> > > and update release note
> > >
> > > Thanks Wenzhuo
> > >
> > > snipped
> > > > >
> > > > > Do we support Traffic Manager and Inline Crypto? If not, can we
> > > > > add this also to limitations?
> > > > The style is only listed supported features here. I don't think
> > > > we'll list everything not supported as limitation.
> > > Thanks for the correction in format, hence the suggested action item
> > > is update limitation section to state what is not supported.
> > >
> > > > The same below.
> > > >
> > > > >
> > > > > > +FW version           = Y
> > > > > > +Module EEPROM dump   = Y
> > > > > > +Multiprocess aware   = Y
> > > > > > +BSD nic_uio          = Y
> > > > > > +Linux UIO            = Y
> > > > > > +Linux VFIO           = Y
> > > > > > +x86-32               = Y
> > > > > > +x86-64               = Y
> > > > >
> > > > > Is cross compile for ARM and PPC disabled since it uses Intel specific
> ISA?
> > > > > Should this be added to limitations?
> > > Is powerpc and arm cross build with non avx and sse ISA supported?
> > > If no, will default '.config' has 'ICE_PMD=n'?
> > Yes, it should be. Currently at least all the Intel NICs set 'y' here.
> > If any problem, we should handle it as bug.
> So my understanding from 'yes' is ICE_DSI PMD is having minimum scalar
> functions which is cross build successful for ARM and powerpc. Hence you
> are leaving it as '=y' like other NIC.
> 
> If above is true, why is arm and PowerPC not added to your documentation?
It's a good question. If you check the history, you'll find that the ARM and PowerPC support is updated by the ARM and PowerPC maintainers. It means that ideally we believe it supports these platforms, but we don't have any of these platforms to confirm that.

> 
> >
> > >
> > > > >
> > > > >
> > > > > Snipped
> > > > >
> > > > > > +
> > > > > > +Config File Options
> > > > > > +~~~~~~~~~~~~~~~~~~~
> > > > > > +
> > > > > > +The following options can be modified in the ``config`` file.
> > > > > > +Please note that enabling debugging options may affect system
> > > > > performance.
> > > > >
> > > > > Do we see real performance variance? If yes, can we highlight
> > > > > this in info section?
> > > > IMH, we could get different numbers because of different scenarios.
> > > > It's only a reminder to not enable debug.
> > > Thanks, but then wording should 'Note: enabling debug option for ICE
> > > will have difference in performance. Hence recommendation is not to
> > > enable unless for debugging the ICE PMD.'
> > > >
> > > > >
> > > > > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  6:03             ` Varghese, Vipin
@ 2018-12-06  6:23               ` Ferruh Yigit
  2018-12-06  6:38               ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-06  6:23 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev

On 12/6/2018 6:03 AM, Varghese, Vipin wrote:
> snipped
>>>>>
>>>>>> +Intel® ICE driver
>>>>>> +==================
>>>>>> +
>>>>>> +This directory contains source code of FreeBSD ice driver of
>>>>>> +version
>>>>>> +2018.10.30 released by the team which develops basic drivers
>>>>>> +for any ice NIC. The directory of base/ contains the original
>>>>>> +source
>>> package.
>>>>>> +This driver is valid for the product(s) listed below
>>>>>> +
>>>>>> +* Intel® Ethernet Network Adapters E810
>>>>>> +
>>>>>> +Updating the driver
>>>>>> +===================
>>>>>> +
>>>>>> +NOTE: The source code in this directory should not be modified
>>>>>> +apart from the following file(s):
>>>>>> +
>>>>>> +    ice_osdep.h
>>>>>
>>>>> Is this README persistent in upcoming releases of 'driver/net/ice'?
>>>> Yes.
>>> If Linux driver is enabled in 4.20.1 or higher, then will the wording
>>> 'This directory contains source code of FreeBSD ice driver of' still hold true?
>> Although I don't understand why we talk about the Linux driver version, but I
>> think the answer is yes.
> Ok, reason for linux driver is because 
> 1. you would be planning to push the default kernel driver to linux for ICE.
> 2. the documentation states FreeBSD 2018.10.30, so if there is future enhancement pulled from linux driver this would added here too?
> 
>>
>>>
>>>>>
>>> snipped
>>>>>> +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
>>>>>> +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
>>>>>> ICE_AQC_MAN_MAC_WR_S)
>>>>>
>>>>> Can the code be rearranged for?
>>>> We don’t want to change the base code for the sake of maintenance.
>>> I do not follow this, is not your team or individual maintaining the same?
>>> because there should be maintainer for this PMD.
>> This code is not implemented by us. You can take us as a representative of the
>> development team.
>> If there is any bug, we'll handle it.
> Ok, currently the team which maintains the code do not want to change the order of code for sake of maintenance. Confusing approach, but I leave this to other members to comment.

Hi Vipin,

Of course the driver team maintains the code; Intel just follows a process to
update the base code instead of modifying it directly, which also covers
synchronization with the kernel code (the concern you shared above). This
process is not specific to the ice driver.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-06  6:23               ` Lu, Wenzhuo
@ 2018-12-06  6:25                 ` Varghese, Vipin
  2018-12-06  6:35                   ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:25 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

snipped
> > > > Thanks Wenzhuo
> > > >
> > > > snipped
> > > > > >
> > > > > > Do we support Traffic Manager and Inline Crypto? If not, can
> > > > > > we add this also to limitations?
> > > > > The style is only listed supported features here. I don't think
> > > > > we'll list everything not supported as limitation.
> > > > Thanks for the correction in format, hence the suggested action
> > > > item is update limitation section to state what is not supported.
> > > >
> > > > > The same below.
> > > > >
> > > > > >
> > > > > > > +FW version           = Y
> > > > > > > +Module EEPROM dump   = Y
> > > > > > > +Multiprocess aware   = Y
> > > > > > > +BSD nic_uio          = Y
> > > > > > > +Linux UIO            = Y
> > > > > > > +Linux VFIO           = Y
> > > > > > > +x86-32               = Y
> > > > > > > +x86-64               = Y
> > > > > >
> > > > > > Is cross compile for ARM and PPC disabled since it uses Intel
> > > > > > specific
> > ISA?
> > > > > > Should this be added to limitations?
> > > > Is powerpc and arm cross build with non avx and sse ISA supported?
> > > > If no, will default '.config' has 'ICE_PMD=n'?
> > > Yes, it should be. Currently at least all the Intel NICs set 'y' here.
> > > If any problem, we should handle it as bug.
> > So my understanding from 'yes' is ICE_DSI PMD is having minimum scalar
> > functions which is cross build successful for ARM and powerpc. Hence
> > you are leaving it as '=y' like other NIC.
> >
> > If above is true, why is arm and PowerPC not added to your documentation?
> It's a good question. If you check the history, you'll find the arm and powerpc
> support is updated by the arm and powerpc maintainers. It means ideally we
> believe it supports these platforms, but we don't have any of these platform to
> confirm that.
Perfect, please add the right team members in 'To:' so they can update it.

> 
> >
> > >
> > > >
> > > > > >
> > > > > >
> > > > > > Snipped
> > > > > >
> > > > > > > +
> > > > > > > +Config File Options
> > > > > > > +~~~~~~~~~~~~~~~~~~~
> > > > > > > +
> > > > > > > +The following options can be modified in the ``config`` file.
> > > > > > > +Please note that enabling debugging options may affect
> > > > > > > +system
> > > > > > performance.
> > > > > >
> > > > > > Do we see real performance variance? If yes, can we highlight
> > > > > > this in info section?
> > > > > IMH, we could get different numbers because of different scenarios.
> > > > > It's only a reminder to not enable debug.
> > > > Thanks, but then wording should 'Note: enabling debug option for
> > > > ICE will have difference in performance. Hence recommendation is
> > > > not to enable unless for debugging the ICE PMD.'
> > > > >
> > > > > >
> > > > > > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-06  6:13           ` Lu, Wenzhuo
@ 2018-12-06  6:31             ` Varghese, Vipin
  2018-12-06  7:04               ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:31 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> > > > > +	ice_init_controlq_parameter(hw);
> > > > > +
> > > > > +	ret = ice_init_hw(hw);
> > > > > +	if (ret) {
> > > > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > > > +		return -EINVAL;
> > > > > +	}
> > > >
> > > > Definition for ice_init_hw in patch 01/20 does not check for
> > > > primary- secondary. Are we allowing secondary to invoke
> > > > ice_init_hw if it is initialized by primary?
> > > It's a patch split issue. We add the check in later patch. Will put
> > > it in this patch in the new version.
> > Suggestion in current patch if comment is kept it will be easier to
> > understand that it is taken care in future patch.
> >
> > Example patch 2/20 has comment stating adding support in patch 5/20.
> > Then in patch 5/20 it removes the ToDo it is easier to read and
> > understand the flow
> I mean I made a mistake that put the check code in a later patch. Actually this
> code should be put in this patch. I plan to correct it.
> But currently I think we're running out of time. I prefer not supporting multi
> process in this release.
Thanks for clarifying. It would be helpful to add 'to do / future items' to the cover letter, code comments and release documents; that helps reviewers, early adopters and later maintainers.

> 
> >
> > >
> > > >
> > > > > +
> > > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > > +
> > > >
> > > > Snipped
> > > >
> > > > > +
> > > > > +static int
> > > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > > >dev_private);
> > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > > >dev_private);
> > > > > +
> > > > > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> > > >
> > > > Should not we check if primary is alive and NIC is used or
> > > > initialized by primary then ' ICE_PROC_SECONDARY_CHECK_RET_0'?
> > > I think it's not a critical issue if the process is terminated
> > > abnormally without
> > uninit.
> > > Comparing with that, I have more concern about this scenario, if the
> > > primary process exit and uninit the resource, the secondary process
> > > is left
> > alone.
> > Since primary is application which reserves the huge page memory
> > (malloc, zmalloc, memzone). So when secondary is killed or stop whole
> > huge pages are released. I am bit confused what is check suggested affecting?
> >
> >  And also
> > > to me it looks not a good solution to change every PMD for this feature.
> > I am not aware about why other PMD are done in specific way. In my
> > humble opinion, if there is a right way let it be used rather than
> > doing other way.
> >
> > I don't
> > > see many PMD support it. Maybe we'd better not support it now and
> > > wait for a better whole picture.
> > I wait for others to comment to this approach.
> >
> > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note
  2018-12-06  6:25                 ` Varghese, Vipin
@ 2018-12-06  6:35                   ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  6:35 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:26 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and
> update release note
> 
> snipped
> > > > > Thanks Wenzhuo
> > > > >
> > > > > snipped
> > > > > > >
> > > > > > > Do we support Traffic Manager and Inline Crypto? If not, can
> > > > > > > we add this also to limitations?
> > > > > > The style is only listed supported features here. I don't
> > > > > > think we'll list everything not supported as limitation.
> > > > > Thanks for the correction in format, hence the suggested action
> > > > > item is update limitation section to state what is not supported.
> > > > >
> > > > > > The same below.
> > > > > >
> > > > > > >
> > > > > > > > +FW version           = Y
> > > > > > > > +Module EEPROM dump   = Y
> > > > > > > > +Multiprocess aware   = Y
> > > > > > > > +BSD nic_uio          = Y
> > > > > > > > +Linux UIO            = Y
> > > > > > > > +Linux VFIO           = Y
> > > > > > > > +x86-32               = Y
> > > > > > > > +x86-64               = Y
> > > > > > >
> > > > > > > Is cross compile for ARM and PPC disabled since it uses
> > > > > > > Intel specific
> > > ISA?
> > > > > > > Should this be added to limitations?
> > > > > Is powerpc and arm cross build with non avx and sse ISA supported?
> > > > > If no, will default '.config' has 'ICE_PMD=n'?
> > > > Yes, it should be. Currently at least all the Intel NICs set 'y' here.
> > > > If any problem, we should handle it as bug.
> > > So my understanding from 'yes' is ICE_DSI PMD is having minimum
> > > scalar functions which is cross build successful for ARM and
> > > powerpc. Hence you are leaving it as '=y' like other NIC.
> > >
> > > If above is true, why is arm and PowerPC not added to your
> documentation?
> > It's a good question. If you check the history, you'll find the arm
> > and powerpc support is updated by the arm and powerpc maintainers. It
> > means ideally we believe it supports these platforms, but we don't
> > have any of these platform to confirm that.
> Perfect, please add the right team members in 'to' so they can update the
> same.
Good suggestion. I'll cc them when sending the new version. But we cannot expect a quick update, because ice is a new device.

> 
> >
> > >
> > > >
> > > > >
> > > > > > >
> > > > > > >
> > > > > > > Snipped
> > > > > > >
> > > > > > > > +
> > > > > > > > +Config File Options
> > > > > > > > +~~~~~~~~~~~~~~~~~~~
> > > > > > > > +
> > > > > > > > +The following options can be modified in the ``config`` file.
> > > > > > > > +Please note that enabling debugging options may affect
> > > > > > > > +system
> > > > > > > performance.
> > > > > > >
> > > > > > > Do we see real performance variance? If yes, can we
> > > > > > > highlight this in info section?
> > > > > > IMH, we could get different numbers because of different scenarios.
> > > > > > It's only a reminder to not enable debug.
> > > > > Thanks, but then wording should 'Note: enabling debug option for
> > > > > ICE will have difference in performance. Hence recommendation is
> > > > > not to enable unless for debugging the ICE PMD.'
> > > > > >
> > > > > > >
> > > > > > > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  6:03             ` Varghese, Vipin
  2018-12-06  6:23               ` Ferruh Yigit
@ 2018-12-06  6:38               ` Lu, Wenzhuo
  2018-12-06  6:41                 ` Varghese, Vipin
  1 sibling, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  6:38 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:03 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
> 
> snipped
> > > > >
> > > > > > +Intel® ICE driver
> > > > > > +==================
> > > > > > +
> > > > > > +This directory contains source code of FreeBSD ice driver of
> > > > > > +version
> > > > > > +2018.10.30 released by the team which develops basic drivers
> > > > > > +for any ice NIC. The directory of base/ contains the original
> > > > > > +source
> > > package.
> > > > > > +This driver is valid for the product(s) listed below
> > > > > > +
> > > > > > +* Intel® Ethernet Network Adapters E810
> > > > > > +
> > > > > > +Updating the driver
> > > > > > +===================
> > > > > > +
> > > > > > +NOTE: The source code in this directory should not be
> > > > > > +modified apart from the following file(s):
> > > > > > +
> > > > > > +    ice_osdep.h
> > > > >
> > > > > Is this README persistent in upcoming releases of 'driver/net/ice'?
> > > > Yes.
> > > If Linux driver is enabled in 4.20.1 or higher, then will the
> > > wording 'This directory contains source code of FreeBSD ice driver of' still
> hold true?
> > Although I don't understand why we talk about the Linux driver
> > version, but I think the answer is yes.
> Ok, reason for linux driver is because
> 1. you would be planning to push the default kernel driver to linux for ICE.
> 2. the documentation states FreeBSD 2018.10.30, so if there is future
> enhancement pulled from linux driver this would added here too?
Sure, we'll keep updating this code.

> 
> >
> > >
> > > > >
> > > snipped
> > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) <<
> > > > > > ICE_AQC_MAN_MAC_WR_S)
> > > > >
> > > > > Can the code be rearranged for?
> > > > We don’t want to change the base code for the sake of maintenance.
> > > I do not follow this, is not your team or individual maintaining the same?
> > > because there should be maintainer for this PMD.
> > This code is not implemented by us. You can take us as a
> > representative of the development team.
> > If there is any bug, we'll handle it.
> Ok, currently the team which maintains the code do not want to change the
> order of code for sake of maintenance. Confusing approach, but I leave this
> to other members to comment.
> 
> >
> > >
> > > snipped
> > > > > > +struct ice_aqc_get_phy_caps_data {
> > > > > > +	__le64 phy_type_low; /* Use values from
> > ICE_PHY_TYPE_LOW_* */
> > > > > > +	__le64 reserved;
> > > > > > +	u8 caps;
> > > > > > +#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
> > > > > > +#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
> > > > > > +#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
> > > > > > +#define ICE_AQC_PHY_EN_LINK				BIT(3)
> > > > > > +#define ICE_AQC_PHY_AN_MODE				BIT(4)
> > > > > > +#define ICE_AQC_PHY_EN_MOD_QUAL
> 	BIT(5)
> > > > > > +#define ICE_AQC_PHY_EN_LESM				BIT(6)
> > > > > > +#define ICE_AQC_PHY_EN_AUTO_FEC
> 	BIT(7)
> > > > > > +#define ICE_AQC_PHY_CAPS_MASK
> > > > > > 	MAKEMASK(0xff, 0)
> > > > > > +	u8 low_power_ctrl;
> > > > > > +#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG
> > > > > 	BIT(0)
> > > > > > +	__le16 eee_cap;
> > > > > > +#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_1000BASE_KX
> 	BIT(3)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
> > > > > > +#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4
> 	BIT(6)
> > > > > > +	__le16 eeer_value;
> > > > > > +	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port
> */
> > > > > > +	u8 phy_fw_ver[8];
> > > > > > +	u8 link_fec_options;
> > > > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
> > > > > > +#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
> > > > > > +#define ICE_AQC_PHY_FEC_25G_RS_528_REQ
> 	BIT(2)
> > > > > > +#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
> > > > > > +#define ICE_AQC_PHY_FEC_25G_RS_544_REQ
> 	BIT(4)
> > > > > > +#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
> > > > > > +#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
> > > > > > +#define ICE_AQC_PHY_FEC_MASK
> > > > > > 	MAKEMASK(0xdf, 0)
> > > > > > +	u8 extended_compliance_code;
> > > > > > +#define ICE_MODULE_TYPE_TOTAL_BYTE			3
> > > > > > +	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS
> 	0xA0
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM
> 	BIT(6)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS
> 	0xA0
> > > > > > +#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
> > > > > > +	u8 qualified_module_count;
> > > > > > +#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
> > > > > > +	struct {
> > > > > > +		u8 v_oui[3];
> > > > > > +		u8 rsvd3;
> > > > > > +		u8 v_part[16];
> > > > > > +		__le32 v_rev;
> > > > > > +		__le64 rsvd8;
> > > > > > +	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
> > > > > > +};
> > > > > > +
> > > > >
> > > > > Does the NIC support physical loopback? I am not able to find here.
> > > > Not sure about it. But no plan for this at this stage.
> > > Please add this in release note and PMD documentation the same.
> > No, we list all the things done. It doesn't make sense to list
> > everything not supported or not implemented.
> I think it is necessary, because application 'testpmd' has option to 'set tx
> loopback (port_id) (on|off)'. So If ICE DSI PMD does not support it and in
> testpmd it fails both DTS team and DPDK user should be made aware via
> documentation for limitation.
If the feature is not supported, a 'not supported' error is returned. That's the RTE layer design and the common solution for all devices.
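
For reference, the RTE-layer pattern is roughly the following (simplified; 'some_feature' is a placeholder, not a real ops field):

    int
    rte_eth_dev_some_feature(uint16_t port_id)
    {
            struct rte_eth_dev *dev = &rte_eth_devices[port_id];

            /* If the PMD does not implement the callback, the API
             * returns -ENOTSUP to the application instead of failing
             * silently. */
            RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->some_feature, -ENOTSUP);
            return (*dev->dev_ops->some_feature)(dev);
    }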

> 
> >
> > >
> > > >
> > > > >
> > > > > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> > > > >
> > > > > Does Low Power PMD is exposed to DPDK? If yes, can you mention
> > > > > the performance numbers or variance in Release documents?
> > > > No plan for it at this release.
> > > Would not it be better to not add here or update as comment and
> > > release note about the same. Right now it is like dead code.
> > >
> > > >
> > > > >
> > > > > Snipped
> > > > >
> > > > > > +
> > > > > > +/* Memory types */
> > > > > > +enum ice_memset_type {
> > > > > > +	ICE_NONDMA_MEM = 0,
> > > > > > +	ICE_DMA_MEM
> > > > > > +};
> > > > > > +
> > > > > > +/* Memcpy types */
> > > > > > +enum ice_memcpy_type {
> > > > > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > > > > +	ICE_NONDMA_TO_DMA,
> > > > > > +	ICE_DMA_TO_DMA,
> > > > > > +	ICE_DMA_TO_NONDMA
> > > > > > +};
> > > > > > +
> > > > >
> > > > > Is this exposed to the user (rte_eth_dev) API? If yes, can you
> > > > > please let us know the performance impact on RX/TX in the release notes too.
> > > > No plan for it in this release.
> > > Please update
> > > a. what the difference is, at least as comments.
> > > b. the release notes about the same.
> > >
> > > snipped
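[Editor's note: a minimal sketch of how an OS-dependence layer might map the ice_memset_type/ice_memcpy_type tags above onto DPDK primitives. The macro bodies are an assumption for illustration, not the driver's actual osdep definitions.]

#include <string.h>
#include <rte_memcpy.h>

/* The shared base code tags each copy/set with a DMA vs. non-DMA type so
 * each OS can pick an appropriate primitive. In a DPDK osdep layer both
 * cases can collapse to the same calls, since rte_memcpy() works on any
 * process-visible memory, including DMA-capable hugepage memory.
 */
#define ice_memcpy(dst, src, len, type) rte_memcpy((dst), (src), (len))
#define ice_memset(ptr, val, len, type) memset((ptr), (val), (len))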

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  6:38               ` Lu, Wenzhuo
@ 2018-12-06  6:41                 ` Varghese, Vipin
  2018-12-06  7:06                   ` Zhang, Qi Z
  2018-12-06  7:17                   ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06  6:41 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

snipped
> > > > > >
> > > > > > Is this README persistent in upcoming releases of 'driver/net/ice'?
> > > > > Yes.
> > > > If the Linux driver is enabled in 4.20.1 or higher, will the
> > > > wording 'This directory contains source code of FreeBSD ice driver
> > > > of' still
> > hold true?
> > > Although I don't understand why we talk about the Linux driver
> > > version, I think the answer is yes.
> > Ok, the reason for mentioning the Linux driver is 1. you would be planning to
> > push the default kernel driver to Linux for ICE.
> > 2. the documentation states FreeBSD 2018.10.30, so if there is a future
> > enhancement pulled from the Linux driver it would be added here too?
> Sure, we'll keep updating this code.
thanks

> 
> >
> > >
> > > >
> > > > > >
> > > > snipped
> > > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
> > > > > >
> > > > > > Can the code be rearranged for?
> > > > > We don't want to change the base code, for the sake of maintenance.
> > > > I do not follow this; is your team or an individual not maintaining the same?
> > > > Because there should be a maintainer for this PMD.
> > > This code is not implemented by us. You can take us as a
> > > representative of the development team.
> > > If there is any bug, we'll handle it.
> > Ok, currently the team which maintains the code does not want to change
> > the order of the code, for the sake of maintenance. A confusing approach, but I
> > leave this to other members to comment.
> >
> > >
> > > >
Snipped

> > > > > >
> > > > > > Does the NIC support physical loopback? I am not able to find it here.
> > > > > Not sure about it. But there is no plan for this at this stage.
> > > > Please add this to the release notes and the PMD documentation as well.
> > > No, we list all the things done. It doesn't make sense to list
> > > everything not supported or not implemented.
> > I think it is necessary, because the application 'testpmd' has the option
> > 'set tx loopback (port_id) (on|off)'. So if the ICE DSI PMD does not
> > support it and it fails in testpmd, both the DTS team and DPDK users should
> > be made aware of the limitation via documentation.
> If the feature is not supported, a 'not supported' failure is returned. That's an RTE
> layer design and a common solution for all the devices.
I am more concerned about DPDK error values and DTS. If DTS uses loopback as a pass case it should pass, and if the feature is not supported it should be documented.

Note: In version 1 I enquired about unit or DTS validation for PMD. Is this still holding good?

> 
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> > > > > >
> > > > > > Is the low-power mode exposed to DPDK? If yes, can you mention
> > > > > > the performance numbers or variance in the release documents?
> > > > > No plan for it in this release.
> > > > Would it not be better to not add it here, or to note it in a comment and
> > > > in the release notes? Right now it is like dead code.
> > > >
> > > > >
> > > > > >
> > > > > > Snipped
> > > > > >
> > > > > > > +
> > > > > > > +/* Memory types */
> > > > > > > +enum ice_memset_type {
> > > > > > > +	ICE_NONDMA_MEM = 0,
> > > > > > > +	ICE_DMA_MEM
> > > > > > > +};
> > > > > > > +
> > > > > > > +/* Memcpy types */
> > > > > > > +enum ice_memcpy_type {
> > > > > > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > > > > > +	ICE_NONDMA_TO_DMA,
> > > > > > > +	ICE_DMA_TO_DMA,
> > > > > > > +	ICE_DMA_TO_NONDMA
> > > > > > > +};
> > > > > > > +
> > > > > >
> > > > > > Is this exposed to the user (rte_eth_dev) API? If yes, can you
> > > > > > please let us know the performance impact on RX/TX in the release notes too.
> > > > > No plan for it in this release.
> > > > Please update
> > > > a. what the difference is, at least as comments.
> > > > b. the release notes about the same.
> > > >
> > > > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
  2018-12-06  6:31             ` Varghese, Vipin
@ 2018-12-06  7:04               ` Lu, Wenzhuo
       [not found]                 ` <039ED4275CED7440929022BC67E70611532FA732@SHSMSX103.ccr.corp.intel.com>
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  7:04 UTC (permalink / raw)
  To: Varghese, Vipin, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:31 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> initialization
> 
> snipped
> > > > > > +	ice_init_controlq_parameter(hw);
> > > > > > +
> > > > > > +	ret = ice_init_hw(hw);
> > > > > > +	if (ret) {
> > > > > > +		PMD_INIT_LOG(ERR, "Failed to initialize HW");
> > > > > > +		return -EINVAL;
> > > > > > +	}
> > > > >
> > > > > Definition for ice_init_hw in patch 01/20 does not check for
> > > > > primary- secondary. Are we allowing secondary to invoke
> > > > > ice_init_hw if it is initialized by primary?
> > > > It's a patch split issue. We add the check in a later patch. Will
> > > > put it in this patch in the new version.
> > > Suggestion: if a comment is kept in the current patch, it will be easier
> > > to understand that it is taken care of in a future patch.
> > >
> > > For example, if patch 2/20 has a comment stating that support is added
> > > in patch 5/20, and then patch 5/20 removes the ToDo, it is easier to
> > > read and understand the flow.
> > I mean I made a mistake in putting the check code in a later patch.
> > Actually this code should be put in this patch. I plan to correct it.
> > But currently I think we're running out of time. I prefer not
> > supporting multi-process in this release.
> Thanks for clarifying the same. It will be helpful to add 'to do or future items' to the
> cover letter, code comments and release documents, which helps reviewers,
> early adopters and later maintainers.
I'd like to suggest focusing on what we have. Sorry, for many reasons it's not appropriate to talk too much about what we'll do in the future. For example, internally we have a plan, but it keeps changing; some things are still under investigation...

> 
> >
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > > > +		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
> > > > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > > > +
> > > > >
> > > > > Snipped
> > > > >
> > > > > > +
> > > > > > +static int
> > > > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > > > +
> > > > > > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> > > > >
> > > > > Should we not check whether the primary is alive and the NIC is used or
> > > > > initialized by the primary before 'ICE_PROC_SECONDARY_CHECK_RET_0'?
> > > > I think it's not a critical issue if the process is terminated
> > > > abnormally without
> > > uninit.
> > > > Compared with that, I have more concern about this scenario: if
> > > > the primary process exits and uninits the resources, the secondary
> > > > process is left
> > > alone.
> > > Since the primary is the application which reserves the hugepage memory
> > > (malloc, zmalloc, memzone), when the secondary is killed or stopped the
> > > whole hugepage area is released. I am a bit confused about what the
> suggested check is affecting?
> > >
> > >  And also
> > > > to me it does not look like a good solution to change every PMD for this feature.
> > > I am not aware of why other PMDs are done in a specific way. In my
> > > humble opinion, if there is a right way let it be used rather than
> > > doing it another way.
> > >
> > > I don't
> > > > see many PMDs supporting it. Maybe we'd better not support it now and
> > > > wait for a better whole picture.
> > > I will wait for others to comment on this approach.
> > >
> > > snipped
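[Editor's note: the ICE_PROC_SECONDARY_CHECK macros discussed above are not shown in this excerpt. Below is a minimal sketch, assuming the standard EAL process-type API, of the kind of secondary-process guard such a macro could provide for ops like ice_dev_start or ice_dev_uninit; the body is an illustration, not the patch's actual definition.]

#include <rte_eal.h>
#include <rte_errno.h>

/* Bail out of a device op when running in a secondary process: the
 * guarded ops reconfigure shared hardware state that only the primary
 * process is expected to own.
 */
#define ICE_PROC_SECONDARY_CHECK					\
	do {								\
		if (rte_eal_process_type() == RTE_PROC_SECONDARY)	\
			return -E_RTE_SECONDARY;			\
	} while (0)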

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  6:41                 ` Varghese, Vipin
@ 2018-12-06  7:06                   ` Zhang, Qi Z
  2018-12-06  7:17                   ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-06  7:06 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:41 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
> 
> snipped
> > > > > > >
> > > > > > > Is this README persistent in upcoming releases of 'driver/net/ice'?
> > > > > > Yes.
> > > > > If the Linux driver is enabled in 4.20.1 or higher, will the
> > > > > wording 'This directory contains source code of FreeBSD ice
> > > > > driver of' still
> > > hold true?
> > > > Although I don't understand why we talk about the Linux driver
> > > > version, I think the answer is yes.
> > > Ok, the reason for mentioning the Linux driver is 1. you would be planning to
> > > push the default kernel driver to Linux for ICE.
> > > 2. the documentation states FreeBSD 2018.10.30, so if there is a
> > > future enhancement pulled from the Linux driver it would be added here too?
> > Sure, we'll keep updating this code.
> thanks
> 
> >
> > >
> > > >
> > > > >
> > > > > > >
> > > > > snipped
> > > > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
> > > > > > > > +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
> > > > > > >
> > > > > > > Can the code be rearranged for?
> > > > > > We don't want to change the base code, for the sake of maintenance.
> > > > > I do not follow this; is your team or an individual not maintaining the same?
> > > > > Because there should be a maintainer for this PMD.
> > > > This code is not implemented by us. You can take us as a
> > > > representative of the development team.
> > > > If there is any bug, we'll handle it.
> > > Ok, currently the team which maintains the code does not want to
> > > change the order of the code, for the sake of maintenance. A confusing
> > > approach, but I leave this to other members to comment.
> > >
> > > >
> > > > >
> Snipped
> 
> > > > > > >
> > > > > > > Does the NIC support physical loopback? I am not able to find it here.
> > > > > > Not sure about it. But there is no plan for this at this stage.
> > > > > Please add this to the release notes and the PMD documentation as well.
> > > > No, we list all the things done. It doesn't make sense to list
> > > > everything not supported or not implemented.
> > > I think it is necessary, because the application 'testpmd' has the option
> > > 'set tx loopback (port_id) (on|off)'. So if the ICE DSI PMD does not
> > > support it and it fails in testpmd, both the DTS team and DPDK users
> > > should be made aware of the limitation via documentation.
> > If the feature is not supported, a 'not supported' failure is
> > returned. That's an RTE layer design and a common solution for all the devices.
> I am more concerned about DPDK error values and DTS. If DTS uses
> loopback as a pass case it should pass, and if the feature is not supported it should
> be documented.
> 
> Note: In version 1 I enquired about unit or DTS validation for PMD. Is this still
> holding good?

I agree it's better to document which features are supported or not; actually we have this mechanism in the DPDK docs.
I think the gap here is that there is no tx loopback related description in doc/guides/nics/features.rst.
The driver is only responsible for updating doc/guides/nics/features/ice.ini for the features that are included in features.rst.

So the issue is not related to this patch, from my view.
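[Editor's note: for illustration, a hypothetical excerpt in the format used by the per-driver matrix files under doc/guides/nics/features/; the actual feature names and values in ice.ini are those defined by features.rst, not this sketch.]

[Features]
Link status          = Y
Rx interrupt         = Y
MTU update           = Y
Jumbo frame          = Y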

> 
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > +#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
> > > > > > >
> > > > > > > Is the low-power mode exposed to DPDK? If yes, can you
> > > > > > > mention the performance numbers or variance in the release
> documents?
> > > > > > No plan for it in this release.
> > > > > Would it not be better to not add it here, or to note it in a comment and
> > > > > in the release notes? Right now it is like dead code.
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Snipped
> > > > > > >
> > > > > > > > +
> > > > > > > > +/* Memory types */
> > > > > > > > +enum ice_memset_type {
> > > > > > > > +	ICE_NONDMA_MEM = 0,
> > > > > > > > +	ICE_DMA_MEM
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +/* Memcpy types */
> > > > > > > > +enum ice_memcpy_type {
> > > > > > > > +	ICE_NONDMA_TO_NONDMA = 0,
> > > > > > > > +	ICE_NONDMA_TO_DMA,
> > > > > > > > +	ICE_DMA_TO_DMA,
> > > > > > > > +	ICE_DMA_TO_NONDMA
> > > > > > > > +};
> > > > > > > > +
> > > > > > >
> > > > > > > Is this exposed to the user (rte_eth_dev) API? If yes, can you
> > > > > > > please let us know the performance impact on RX/TX in the release notes
> too.
> > > > > > No plan for it in this release.
> > > > > Please update
> > > > > a. what the difference is, at least as comments.
> > > > > b. the release notes about the same.
> > > > >
> > > > > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
  2018-12-06  6:41                 ` Varghese, Vipin
  2018-12-06  7:06                   ` Zhang, Qi Z
@ 2018-12-06  7:17                   ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-06  7:17 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 2:41 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 01/20] net/ice: add base code
> 
> 
> Note: In version 1 I enquired about unit or DTS validation for PMD. Is this
> still holding good?
Yes, it's planned and ongoing.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-06  5:26         ` Varghese, Vipin
@ 2018-12-06 11:52           ` Ananyev, Konstantin
  2018-12-06 14:16             ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Ananyev, Konstantin @ 2018-12-06 11:52 UTC (permalink / raw)
  To: Varghese, Vipin, Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
> Sent: Thursday, December 6, 2018 5:27 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
> 
> Hi Wenzhuo,
> 
> Please find my updates below
> 
> snipped
> > > > +	if (!vsi->rss_key)
> > > > +		vsi->rss_key = rte_zmalloc("rss_key",
> > > > +					   vsi->rss_key_size, 0);
> > > > +	if (!vsi->rss_lut)
> > > > +		vsi->rss_lut = rte_zmalloc("rss_lut",
> > > > +					   vsi->rss_lut_size, 0);
> > >
> > > 2 suggestions
> > > 1. should the name be macro?
> > Sorry, which name?
> Would you like to convert and use as?
> #define ICE_RSS_KEY "rss_key"
> #define ICE_RSS_LUT "rss_lut"
> 
> And replace ' rte_zmalloc("rss_key",' as ' rte_zmalloc(ICE_RSS_KEY,'
> >
> > > 2. if there are multiple 810 NICs under DPDK, should not each rss name be
> > > different, like "rss_key-%u" where %u is the port number?
> > Sorry, I don't understand the question.
> Let's assume we have 2 ICE_DSI NICs on the PCIe bus. Then creating 'rte_zmalloc("rss_key",' for port 1 will fail since the malloc region "rss_key"
> already exists for port 0.

It wouldn't.
rte_malloc() simply ignores the name argument.
You can even put NULL here.
As I remember, Anatoly suggested removing it completely in the future.
Konstantin
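[Editor's note: a minimal sketch of the point above, using the real rte_zmalloc() signature (const char *type, size_t size, unsigned align). The struct here is a hypothetical stand-in for the patch's ice_vsi, kept only so the snippet is self-contained.]

#include <stdint.h>
#include <rte_malloc.h>

struct vsi_rss {		/* minimal stand-in for the patch's ice_vsi */
	uint8_t *rss_key;
	uint8_t *rss_lut;
	uint16_t rss_key_size;
	uint16_t rss_lut_size;
};

static int
vsi_rss_alloc(struct vsi_rss *vsi)
{
	/* The first rte_zmalloc() argument is a debug "type" tag, not a
	 * unique key: allocations are not looked up by it, so two ports
	 * using the same string do not collide, and NULL is accepted.
	 */
	vsi->rss_key = rte_zmalloc(NULL, vsi->rss_key_size, 0);
	vsi->rss_lut = rte_zmalloc(NULL, vsi->rss_lut_size, 0);
	if (vsi->rss_key == NULL || vsi->rss_lut == NULL) {
		rte_free(vsi->rss_key);
		rte_free(vsi->rss_lut);
		return -1;
	}
	return 0;
}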

> 
> >
> > >
> > > Snipped
> > >
> > > > +
> > > > +static int
> > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > +	uint16_t nb_rxq = 0;
> > > > +	uint16_t nb_txq, i;
> > > > +	int ret;
> > > > +
> > > > +	ICE_PROC_SECONDARY_CHECK;
> > >
> > > Device start is not supported, but how is this differentiated between a
> > > primary-configured device and a secondary-configured device?
> > >
> > > I.e.: the primary uses the blacklist '-b BB:DD:F' while the secondary uses '-w
> > > BB:DD:F'. In this case, since we are checking the process type, will this return without
> > start?
> Two updates with respect to your comment,
> 1. tools and applications like dpdk-procinfo will no longer be able to pull data, since you are asking to blacklist.
> 2. If there are functions which need to be shared, like the primary using rx-0 and tx-0 and the secondary rx-1 and tx-1, how do we make this work?
> 
> snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-06 11:52           ` Ananyev, Konstantin
@ 2018-12-06 14:16             ` Varghese, Vipin
  2018-12-07  1:02               ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-06 14:16 UTC (permalink / raw)
  To: Ananyev, Konstantin, Lu, Wenzhuo, dev
  Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

snipped
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
> > Sent: Thursday, December 6, 2018 5:27 AM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> > <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and
> > queue ops
> >
> > Hi Wenzhuo,
> >
> > Please find my updates below
> >
> > snipped
> > > > > +	if (!vsi->rss_key)
> > > > > +		vsi->rss_key = rte_zmalloc("rss_key",
> > > > > +					   vsi->rss_key_size, 0);
> > > > > +	if (!vsi->rss_lut)
> > > > > +		vsi->rss_lut = rte_zmalloc("rss_lut",
> > > > > +					   vsi->rss_lut_size, 0);
> > > >
> > > > 2 suggestions
> > > > 1. should the name be macro?
> > > Sorry, which name?
> > Would you like to convert and use as?
> > #define ICE_RSS_KEY "rss_key"
> > #define ICE_RSS_LUT "rss_lut"
> >
> > And replace ' rte_zmalloc("rss_key",' as ' rte_zmalloc(ICE_RSS_KEY,'
> > >
> > > > 2. if there are multiple 810 NICs under DPDK, should not each rss name
> > > > be different, like "rss_key-%u" where %u is the port number?
> > > Sorry, I don't understand the question.
> > Let's assume we have 2 ICE_DSI NICs on the PCIe bus. Then creating '
> rte_zmalloc("rss_key",' for port 1 will fail since the malloc region "rss_key"
> > already exists for port 0.
> 
> It wouldn't.
> rte_malloc() simply ignores the name argument.
> You can even put NULL here.
> As I remember, Anatoly suggested removing it completely in the future.
> Konstantin
Ohh, I was not aware of this. Then the suggestion from my end would be to pass 'NULL'. Wenzhuo, you can ignore the MACRO; the safe thing is passing NULL.

> 
> >
> > >
> > > >
> > > > Snipped
> > > >
> > > > > +
> > > > > +static int
> > > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > > +	uint16_t nb_rxq = 0;
> > > > > +	uint16_t nb_txq, i;
> > > > > +	int ret;
> > > > > +
> > > > > +	ICE_PROC_SECONDARY_CHECK;
> > > >
> > > > Device start is not supported, but how is this differentiated between a
> > > > primary-configured device and a secondary-configured device?
> > > >
> > > > I.e.: the primary uses the blacklist '-b BB:DD:F' while the secondary uses '-w
> > > > BB:DD:F'. In this case, since we are checking the process type, will this
> > > > return without
> > > start?
> > Two updates with respect to your comment, 1. tools and applications like
> > dpdk-procinfo will no longer be able to pull data, since you are asking to
> > blacklist.
> > 2. If there are functions which need to be shared, like the primary using
> > rx-0 and tx-0 and the secondary rx-1 and tx-1, how do we make this work?
> >
> > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops
  2018-12-06 14:16             ` Varghese, Vipin
@ 2018-12-07  1:02               ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-07  1:02 UTC (permalink / raw)
  To: Varghese, Vipin, Ananyev, Konstantin, dev
  Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Vipin, Konstantin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 6, 2018 10:16 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue
> ops
> 
> snipped
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Varghese, Vipin
> > > Sent: Thursday, December 6, 2018 5:27 AM
> > > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > > Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> > > <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH v2 03/20] net/ice: support device and
> > > queue ops
> > >
> > > Hi Wenzhuo,
> > >
> > > Please find my updates below
> > >
> > > snipped
> > > > > > +	if (!vsi->rss_key)
> > > > > > +		vsi->rss_key = rte_zmalloc("rss_key",
> > > > > > +					   vsi->rss_key_size, 0);
> > > > > > +	if (!vsi->rss_lut)
> > > > > > +		vsi->rss_lut = rte_zmalloc("rss_lut",
> > > > > > +					   vsi->rss_lut_size, 0);
> > > > >
> > > > > 2 suggestions
> > > > > 1. should the name be macro?
> > > > Sorry, which name?
> > > Would you like to convert and use as?
> > > #define ICE_RSS_KEY "rss_key"
> > > #define ICE_RSS_LUT "rss_lut"
> > >
> > > And replace ' rte_zmalloc("rss_key",' as ' rte_zmalloc(ICE_RSS_KEY,'
> > > >
> > > > > 2. if there are multiple 810 NICs under DPDK, should not each rss name
> > > > > be different, like "rss_key-%u" where %u is the port number?
> > > > Sorry, I don't understand the question.
> > > Let's assume we have 2 ICE_DSI NICs on the PCIe bus. Then creating '
> > rte_zmalloc("rss_key",' for port 1 will fail since the malloc region "rss_key"
> > > already exists for port 0.
> >
> > It wouldn't.
> > rte_malloc() simply ignores the name argument.
> > You can even put NULL here.
> > As I remember, Anatoly suggested removing it completely in the future.
> > Konstantin
> Ohh, I was not aware of this. Then the suggestion from my end would be to
> pass 'NULL'. Wenzhuo, you can ignore the MACRO; the safe thing is passing
> NULL.
I checked the code; currently someone already passes NULL to the function. I'd like to change it to NULL. Thanks.

> 
> >
> > >
> > > >
> > > > >
> > > > > Snipped
> > > > >
> > > > > > +
> > > > > > +static int
> > > > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> > > > > > +	uint16_t nb_rxq = 0;
> > > > > > +	uint16_t nb_txq, i;
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	ICE_PROC_SECONDARY_CHECK;
> > > > >
> > > > > Device start is not supported, but how is this differentiated
> > > > > between a primary-configured device and a secondary-configured device?
> > > > >
> > > > > I.e.: the primary uses the blacklist '-b BB:DD:F' while the secondary uses
> > > > > '-w BB:DD:F'. In this case, since we are checking the process type,
> > > > > will this return without
> > > > start?
> > > Two updates with respect to your comment, 1. tools and applications
> > > like dpdk-procinfo will no longer be able to pull data, since you are
> > > asking to blacklist.
> > > 2. If there are functions which need to be shared, like the primary using
> > > rx-0 and tx-0 and the secondary rx-1 and tx-1, how do we make this work?
> > >
> > > snipped

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (20 preceding siblings ...)
  2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
@ 2018-12-12  6:59 ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 01/34] net/ice: Add registers for Intel(R) E800 Series NIC Wenzhuo Lu
                     ` (34 more replies)
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                   ` (2 subsequent siblings)
  24 siblings, 35 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds the support of a new net PMD,
Intel® Ethernet Network Adapters E810, also
called ice.

Besides enabling this new NIC, this series also adds
support for some other features on this NIC,
listed below.

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics & xstats.

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed the support of secondary process.
 - Split the base code into more patches.
 - Pass NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.


Paul M Stillwell Jr (14):
  net/ice: Add registers for Intel(R) E800 Series NIC
  net/ice: Add basic structures
  net/ice: Add admin queue structures and commands
  net/ice: Add sideband queue info
  net/ice: Add device IDs for Intel(r) E800 Series NICs
  net/ice: Add control queue information
  net/ice: Add data center bridging (DCB)
  net/ice: Add basic transmit scheduler
  net/ice: Add virtual switch code
  net/ice: Add code to work with the NVM
  net/ice: Add common functions
  net/ice: Add various headers
  net/ice: Add protocol structures and defines
  net/ice: Add structures for RX/TX queues

Wenzhuo Lu (20):
  net/ice: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops
  doc: add ICE description and update release note
  net/ice: support meson build

 MAINTAINERS                              |    7 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   38 +
 doc/guides/nics/ice.rst                  |  101 +
 doc/guides/rel_notes/release_19_02.rst   |    4 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   76 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1891 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_common.c        | 3521 +++++++++++
 drivers/net/ice/base/ice_common.h        |  186 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_dcb.c           | 1385 +++++
 drivers/net/ice/base/ice_dcb.h           |  220 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2291 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 5380 ++++++++++++++++
 drivers/net/ice/base/ice_sched.h         |  210 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2812 +++++++++
 drivers/net/ice/base/ice_switch.h        |  333 +
 drivers/net/ice/base/ice_type.h          |  869 +++
 drivers/net/ice/base/meson.build         |   30 +
 drivers/net/ice/ice_ethdev.c             | 3263 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_lan_rxtx.c           | 2898 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  155 +
 drivers/net/ice/meson.build              |   15 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 41 files changed, 38459 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_dcb.c
 create mode 100644 drivers/net/ice/base/ice_dcb.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 01/34] net/ice: Add registers for Intel(R) E800 Series NIC
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures Wenzhuo Lu
                     ` (33 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the registers that comprise the Intel(R) E800
Series NIC. There is no functionality in this patch.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 MAINTAINERS                           |    6 +
 drivers/net/ice/base/ice_hw_autogen.h | 9815 +++++++++++++++++++++++++++++++++
 2 files changed, 9821 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
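
[Editor's note: the register list below relies on two helper macros from the driver's osdep layer, BIT() and MAKEMASK(), which build single-bit and shifted multi-bit masks; paired _S/_M defines give each field's shift and mask. A minimal sketch, assuming the conventional definitions (not shown in this patch), of how such fields are read and updated; GL_RDPU_CNTRL_ECO is taken from the list below.]

#include <stdint.h>

#define BIT(n)		(1UL << (n))		/* single-bit mask */
#define MAKEMASK(m, s)	((m) << (s))		/* multi-bit mask at shift s */

#define GL_RDPU_CNTRL_ECO_S	21
#define GL_RDPU_CNTRL_ECO_M	MAKEMASK(0x7FF, 21)

/* Extract a field: mask the register value, then shift it down. */
static inline uint32_t
rdpu_cntrl_eco_get(uint32_t reg)
{
	return (reg & GL_RDPU_CNTRL_ECO_M) >> GL_RDPU_CNTRL_ECO_S;
}

/* Update a field: clear its bits, then OR in the shifted new value. */
static inline uint32_t
rdpu_cntrl_eco_set(uint32_t reg, uint32_t val)
{
	return (reg & ~GL_RDPU_CNTRL_ECO_M) |
	       ((val << GL_RDPU_CNTRL_ECO_S) & GL_RDPU_CNTRL_ECO_M);
}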

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba312..37f3bf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
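
Every firmware, mailbox and sideband queue in this file repeats the same five-register pattern: BAL/BAH hold the 64-bit ring base (BAL keeps address bits 31:6, so rings are 64-byte aligned), LEN carries the ring size in its low bits plus the VFE/OVFL/CRIT status flags and an enable bit at bit 31, and H/T are the hardware head and software tail. A sketch of bringing up a VSI mailbox receive queue under those assumptions; wr32() is a hypothetical MMIO helper standing in for the driver's accessor, and handing all buffers to hardware by writing tail = n - 1 is the usual receive-ring convention rather than something this header spells out:

    #include <stdint.h>

    extern void wr32(uint32_t reg, uint32_t val); /* hypothetical accessor */

    /* Sketch: enable the mailbox ARQ of VSI 'vsi' with 'n' descriptors
     * (n <= 0x3FF per the ARQLEN mask) at the 64-byte-aligned DMA
     * address 'base'. */
    static void
    vsi_mbx_arq_setup(uint16_t vsi, uint64_t base, uint16_t n)
    {
            wr32(VSI_MBX_ARQBAL(vsi), (uint32_t)base); /* bits 31:6 of base */
            wr32(VSI_MBX_ARQBAH(vsi), (uint32_t)(base >> 32));
            wr32(VSI_MBX_ARQLEN(vsi),
                 ((uint32_t)n & VSI_MBX_ARQLEN_ARQLEN_M) |
                 VSI_MBX_ARQLEN_ARQENABLE_M);
            wr32(VSI_MBX_ARQT(vsi), (uint32_t)(n - 1)); /* all bufs to HW */
    }
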
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
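
GL_ACL_ACCESS_CMD and GL_ACL_ACCESS_STATUS form an indirect access pair: software writes the table ID, entry index and operation into the command register with EXECUTE set, then polls STATUS until BUSY drops and checks DONE/ERROR, with ERROR_CODE refining the failure. A sketch of that handshake; rd32()/wr32() are hypothetical accessors and the write polarity of the OPERATION bit is an assumption, not stated by this header:

    #include <stdint.h>

    extern uint32_t rd32(uint32_t reg);           /* hypothetical */
    extern void wr32(uint32_t reg, uint32_t val); /* hypothetical */

    /* Sketch: issue one indirect ACL table access and return the
     * STATUS.ERROR bit (0 on success). */
    static int
    acl_access(uint8_t table, uint16_t entry, int write)
    {
            uint32_t cmd = GL_ACL_ACCESS_CMD_EXECUTE_M;

            cmd |= ((uint32_t)table << GL_ACL_ACCESS_CMD_TABLE_ID_S) &
                   GL_ACL_ACCESS_CMD_TABLE_ID_M;
            cmd |= ((uint32_t)entry << GL_ACL_ACCESS_CMD_ENTRY_INDEX_S) &
                   GL_ACL_ACCESS_CMD_ENTRY_INDEX_M;
            if (write)
                    cmd |= GL_ACL_ACCESS_CMD_OPERATION_M; /* polarity assumed */
            wr32(GL_ACL_ACCESS_CMD, cmd);

            while (rd32(GL_ACL_ACCESS_STATUS) & GL_ACL_ACCESS_STATUS_BUSY_M)
                    ; /* real code would bound this loop */
            return !!(rd32(GL_ACL_ACCESS_STATUS) &
                      GL_ACL_ACCESS_STATUS_ERROR_M);
    }
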
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
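
GLCOMM_QTX_CNTX_CTL/DATA/STAT expose TX queue contexts through the same indirect idiom: select the queue and command in CTL, pulse CMD_EXEC, wait for STAT.CMD_IN_PROG to clear, then move the context through the ten GLCOMM_QTX_CNTX_DATA words. A sketch of the read direction, with the command encoding left as a parameter since it is defined elsewhere:

    #include <stdint.h>

    extern uint32_t rd32(uint32_t reg);           /* hypothetical */
    extern void wr32(uint32_t reg, uint32_t val); /* hypothetical */

    /* Sketch: fetch the ten-dword TX queue context of queue 'qid';
     * 'rd_cmd' is the read opcode for the CMD field. */
    static void
    qtx_cntx_read(uint16_t qid, uint32_t rd_cmd, uint32_t ctx[10])
    {
            uint32_t ctl;
            int i;

            ctl = ((uint32_t)qid << GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S) &
                  GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M;
            ctl |= (rd_cmd << GLCOMM_QTX_CNTX_CTL_CMD_S) &
                   GLCOMM_QTX_CNTX_CTL_CMD_M;
            ctl |= GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M;
            wr32(GLCOMM_QTX_CNTX_CTL, ctl);

            while (rd32(GLCOMM_QTX_CNTX_STAT) &
                   GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M)
                    ; /* real code would time out */
            for (i = 0; i < 10; i++)
                    ctx[i] = rd32(GLCOMM_QTX_CNTX_DATA(i));
    }
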
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
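
QTX_COMM_DBELL is the per-queue TX tail doorbell and QTX_COMM_HEAD mirrors hardware progress through the ring, with RS_PENDING flagging an outstanding report-status completion. A sketch of the usual post-then-poll pattern, again with hypothetical accessors:

    #include <stdint.h>

    extern uint32_t rd32(uint32_t reg);           /* hypothetical */
    extern void wr32(uint32_t reg, uint32_t val); /* hypothetical */

    /* Publish a new tail once descriptors up to (but excluding) 'tail'
     * have been written for TX queue 'qid'. */
    static void
    txq_bump_tail(uint16_t qid, uint16_t tail)
    {
            wr32(QTX_COMM_DBELL(qid), tail);
    }

    /* How far hardware has consumed the ring. */
    static uint16_t
    txq_head(uint16_t qid)
    {
            return (uint16_t)(rd32(QTX_COMM_HEAD(qid)) &
                              QTX_COMM_HEAD_HEAD_M);
    }
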
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
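+/*
+ * Example (a minimal sketch; the ordering mirrors the classic Intel HMC
+ * segment-descriptor sequence and is an assumption here, as are pa,
+ * bp_count and sd_index): stage the descriptor data, then latch it with
+ * a write-flagged SDCMD. Setting PMSDTYPE selects a paged descriptor.
+ *
+ *	u32 sd_low = ((u32)pa & PFHMC_SDDATALOW_PMSDDATALOW_M) |
+ *		     (bp_count << PFHMC_SDDATALOW_PMSDBPCOUNT_S) |
+ *		     PFHMC_SDDATALOW_PMSDTYPE_M |
+ *		     PFHMC_SDDATALOW_PMSDVALID_M;
+ *
+ *	wr32(hw, PFHMC_SDDATAHIGH, (u32)(pa >> 32));
+ *	wr32(hw, PFHMC_SDDATALOW, sd_low);
+ *	wr32(hw, PFHMC_SDCMD,
+ *	     PFHMC_SDCMD_PMSDWR_M | (sd_index & PFHMC_SDCMD_PMSDIDX_M));
+ */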
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
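+/*
+ * Example (illustrative sketch): reading the SoC fuse word and pulling
+ * out one field with its _S/_M pair, assuming rd32() from ice_osdep.h.
+ *
+ *	u32 fuses = rd32(hw, GL_UFUSE_SOC);
+ *	u8 port_mode = (fuses & GL_UFUSE_SOC_PORT_MODE_M) >>
+ *		       GL_UFUSE_SOC_PORT_MODE_S;
+ */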
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */

+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
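+/*
+ * Example (a minimal sketch): re-arming an interrupt vector and selecting
+ * its ITR index through GLINT_DYN_CTL, the usual end-of-handler pattern.
+ * wr32() is assumed from ice_osdep.h; vector and itr_idx are caller
+ * state.
+ *
+ *	wr32(hw, GLINT_DYN_CTL(vector),
+ *	     GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ *	     (itr_idx << GLINT_DYN_CTL_ITR_INDX_S));
+ */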
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
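+/*
+ * Example (sketch): programming a throttling interval for one ITR of a
+ * vector. The value is confined to the 12-bit INTERVAL field; its units
+ * follow the granularity configured in GLINT_CTL above.
+ *
+ *	wr32(hw, GLINT_ITR(itr_idx, vector),
+ *	     interval & GLINT_ITR_INTERVAL_M);
+ */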
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _DBQM=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
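+/*
+ * Example (illustrative): binding an RX queue's interrupt cause to an
+ * MSI-X vector and ITR index via QINT_RQCTL; TX queues follow the same
+ * shape through QINT_TQCTL. qid, itr_idx and msix_vec are caller state.
+ *
+ *	wr32(hw, QINT_RQCTL(qid),
+ *	     QINT_RQCTL_CAUSE_ENA_M |
+ *	     (itr_idx << QINT_RQCTL_ITR_INDX_S) |
+ *	     (msix_vec << QINT_RQCTL_MSIX_INDX_S));
+ */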
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _VP128=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _VP128=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
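+/*
+ * Example (sketch): recovering the contiguous TX queue range assigned to
+ * this PF; the FIRSTQ/LASTQ fields are only meaningful while VALID is
+ * set.
+ *
+ *	u32 alloc = rd32(hw, PFLAN_TX_QALLOC);
+ *
+ *	if (alloc & PFLAN_TX_QALLOC_VALID_M) {
+ *		u16 first = (alloc & PFLAN_TX_QALLOC_FIRSTQ_M) >>
+ *			    PFLAN_TX_QALLOC_FIRSTQ_S;
+ *		u16 last = (alloc & PFLAN_TX_QALLOC_LASTQ_M) >>
+ *			   PFLAN_TX_QALLOC_LASTQ_S;
+ *	}
+ */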
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
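+/*
+ * Example (sketch): bumping the RX tail after refilling descriptors so
+ * hardware sees the new buffers; the written value is confined to the
+ * 13-bit TAIL field. next_to_use is caller state.
+ *
+ *	wr32(hw, QRX_TAIL(qid), next_to_use & QRX_TAIL_TAIL_M);
+ */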
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _VP16=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044C000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
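+/*
+ * Example (illustrative): each VSILAN_QTABLE word packs two queue
+ * indices, so entry _i of a VSI's table yields queues 2*_i and 2*_i+1.
+ * i and vsi_num are caller state.
+ *
+ *	u32 qtable = rd32(hw, VSILAN_QTABLE(i, vsi_num));
+ *	u16 q0 = (qtable & VSILAN_QTABLE_QINDEX_0_M) >>
+ *		 VSILAN_QTABLE_QINDEX_0_S;
+ *	u16 q1 = (qtable & VSILAN_QTABLE_QINDEX_1_M) >>
+ *		 VSILAN_QTABLE_QINDEX_1_S;
+ */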
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
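+/*
+ * The GL_MDET_xxx, PF_MDET_xxx and VP_MDET_xxx registers above report
+ * Malicious Driver Detection events per pipeline stage. A decode
+ * sketch for one TX PQM event, assuming rd32()/wr32() from
+ * ice_osdep.h; writing all ones back to re-arm the register is stated
+ * here as an assumption:
+ *
+ *	u32 mdet = rd32(hw, GL_MDET_TX_PQM);
+ *
+ *	if (mdet & GL_MDET_TX_PQM_VALID_M) {
+ *		u8 pf = (mdet & GL_MDET_TX_PQM_PF_NUM_M) >>
+ *			GL_MDET_TX_PQM_PF_NUM_S;
+ *		u8 vf = (mdet & GL_MDET_TX_PQM_VF_NUM_M) >>
+ *			GL_MDET_TX_PQM_VF_NUM_S;
+ *		u16 queue = (mdet & GL_MDET_TX_PQM_QNUM_M) >>
+ *			    GL_MDET_TX_PQM_QNUM_S;
+ *		u8 mal_type = (mdet & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+ *			      GL_MDET_TX_PQM_MAL_TYPE_S;
+ *
+ *		wr32(hw, GL_MDET_TX_PQM, 0xFFFFFFFF);
+ *	}
+ */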
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
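+/*
+ * MSIX_TADD1/MSIX_TUADD1/MSIX_TMSG/MSIX_TVCTRL1 (and their _PAGE
+ * aliases) follow the standard PCIe MSI-X table layout: one 16-byte
+ * entry per vector holding low address, upper address, message data
+ * and vector control, with bit 0 of vector control masking the
+ * vector. A sketch that masks vector 'vec', assuming rd32()/wr32()
+ * from ice_osdep.h:
+ *
+ *	u32 ctrl = rd32(hw, MSIX_TVCTRL1(vec));
+ *
+ *	wr32(hw, MSIX_TVCTRL1(vec), ctrl | MSIX_TVCTRL1_MASK_M);
+ */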
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
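+/*
+ * The GLNVM_ULD done bits indicate which units have finished loading
+ * after the corresponding reset. A polling sketch for reset
+ * completion, assuming rd32() and the ice_msec_delay() helper from
+ * ice_osdep.h; the exact set of bits to wait on is an assumption:
+ *
+ *	u32 done = GLNVM_ULD_PCIER_DONE_M | GLNVM_ULD_CORER_DONE_M |
+ *		   GLNVM_ULD_GLOBR_DONE_M | GLNVM_ULD_POR_DONE_M;
+ *	int cnt;
+ *
+ *	for (cnt = 0; cnt < 100; cnt++) {
+ *		if ((rd32(hw, GLNVM_ULD) & done) == done)
+ *			break;
+ *		ice_msec_delay(10, true);
+ *	}
+ */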
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
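+/*
+ * PF_FUNC_RID holds this function's PCI requester ID in the standard
+ * bus/device/function layout. A decode sketch, assuming rd32() from
+ * ice_osdep.h:
+ *
+ *	u32 rid = rd32(hw, PF_FUNC_RID);
+ *	u8 func = (rid & PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+ *		  PF_FUNC_RID_FUNCTION_NUMBER_S;
+ *	u8 dev = (rid & PF_FUNC_RID_DEVICE_NUMBER_M) >>
+ *		 PF_FUNC_RID_DEVICE_NUMBER_S;
+ *	u8 bus = (rid & PF_FUNC_RID_BUS_NUMBER_M) >>
+ *		 PF_FUNC_RID_BUS_NUMBER_S;
+ */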
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PCIR */

+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
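+/*
+ * A value for PFPE_WQEALLOC is composed from the shift/mask pairs
+ * above and written out; treating it as a protocol-engine doorbell is
+ * an assumption. A composition sketch, assuming wr32() from
+ * ice_osdep.h, with qp_id and desc_idx as illustrative variables:
+ *
+ *	u32 db = ((qp_id << PFPE_WQEALLOC_PEQPID_S) &
+ *		  PFPE_WQEALLOC_PEQPID_M) |
+ *		 (((u32)desc_idx << PFPE_WQEALLOC_WQE_DESC_INDEX_S) &
+ *		  PFPE_WQEALLOC_WQE_DESC_INDEX_M);
+ *
+ *	wr32(hw, PFPE_WQEALLOC, db);
+ */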
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
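+/* Most per-PF GLPES statistics above are wide counters split across a
+ * 32-bit LO register and a narrower HI register. A sketch of assembling
+ * one such counter, assuming a hypothetical rd32() helper; a real driver
+ * would typically also guard against rollover between the two reads:
+ *
+ *	u64 pkts = rd32(hw, GLPES_PFUDPTXPKTSLO(pf));
+ *	pkts |= (u64)(rd32(hw, GLPES_PFUDPTXPKTSHI(pf)) &
+ *		      GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M) << 32;
+ */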
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
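+/* GLQF_HLUT packs four 6-bit lookup-table entries (LUT0..LUT3) per 32-bit
+ * register. A read-modify-write sketch for updating a single entry,
+ * assuming hypothetical rd32()/wr32() helpers that are not defined here:
+ *
+ *	u32 reg = rd32(hw, GLQF_HLUT(prof_idx, reg_idx));
+ *	reg &= ~GLQF_HLUT_LUT1_M;
+ *	reg |= ((u32)queue << GLQF_HLUT_LUT1_S) & GLQF_HLUT_LUT1_M;
+ *	wr32(hw, GLQF_HLUT(prof_idx, reg_idx), reg);
+ */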
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
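+/*
+ * Illustrative sketch, not generated from the register map: PFQF_FD_CNT
+ * carries two 15-bit counts of allocated Flow Director filters, guaranteed
+ * and best-effort. A hypothetical helper could extract both as below,
+ * assuming the rd32() MMIO accessor that ice_osdep.h provides.
+ */
+static inline void
+ice_get_fd_cnt(struct ice_hw *hw, u32 *guar, u32 *best)
+{
+	u32 val = rd32(hw, PFQF_FD_CNT);
+
+	*guar = (val & PFQF_FD_CNT_FD_GCNT_M) >> PFQF_FD_CNT_FD_GCNT_S;
+	*best = (val & PFQF_FD_CNT_FD_BCNT_M) >> PFQF_FD_CNT_FD_BCNT_S;
+}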
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
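+/*
+ * Illustrative sketch, not generated from the register map: parameterized
+ * registers expand to a base address plus a per-index stride, here one
+ * VPQF_PE_CTL1 copy per VF. A hypothetical bounds-checked write, assuming
+ * the wr32() MMIO accessor from ice_osdep.h:
+ */
+static inline void
+ice_set_vf_pe_hsize(struct ice_hw *hw, u16 vf_id, u32 hsize)
+{
+	if (vf_id > VPQF_PE_CTL1_MAX_INDEX)
+		return;
+	wr32(hw, VPQF_PE_CTL1(vf_id),
+	     (hsize << VPQF_PE_CTL1_PEHSIZE_S) & VPQF_PE_CTL1_PEHSIZE_M);
+}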
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
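+/*
+ * Illustrative sketch, not generated from the register map: the wider port
+ * statistics are split into a 32-bit low (*L) and an 8-bit high (*H)
+ * register, forming 40-bit counters. A hypothetical raw read of the good
+ * octets received counter, assuming rd32() from ice_osdep.h; rollover
+ * tracking across successive samples is omitted here.
+ */
+static inline u64
+ice_read_gorc(struct ice_hw *hw, u8 port)
+{
+	u64 lo = rd32(hw, GLPRT_GORCL(port));
+	u64 hi = rd32(hw, GLPRT_GORCH(port)) & GLPRT_GORCH_GORCH_M;
+
+	return (hi << 32) | lo;
+}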
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
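+/*
+ * Illustrative sketch, not generated from the register map: a pruning-list
+ * update could be issued by packing opcode, list index, VSI number and bit
+ * value into EMP_SWT_PRUNIND. The opcode encoding is firmware-defined and
+ * not spelled out here; wr32() is assumed from ice_osdep.h.
+ */
+static inline void
+ice_write_prunind(struct ice_hw *hw, u32 opcode, u32 idx, u32 vsi, bool set)
+{
+	u32 val = ((opcode << EMP_SWT_PRUNIND_OPCODE_S) &
+		   EMP_SWT_PRUNIND_OPCODE_M) |
+		  ((idx << EMP_SWT_PRUNIND_LIST_INDEX_NUM_S) &
+		   EMP_SWT_PRUNIND_LIST_INDEX_NUM_M) |
+		  ((vsi << EMP_SWT_PRUNIND_VSI_NUM_S) &
+		   EMP_SWT_PRUNIND_VSI_NUM_M);
+
+	if (set)
+		val |= EMP_SWT_PRUNIND_BIT_VALUE_M;
+	wr32(hw, EMP_SWT_PRUNIND, val);
+}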
+#define EMP_SWT_REPIND				0x0020401C /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040A4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040AC /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041C0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define GLPE_TSCD_FLR(_i)			(0x0051E24C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
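+
+/* Usage sketch (illustrative): every register field above is described by
+ * a shift/mask pair (_S/_M). A field is read from a register value 'val'
+ * as, e.g.,
+ *	owner = (val & PFHH_SEM_PF_OWNER_M) >> PFHH_SEM_PF_OWNER_S;
+ * and written back as
+ *	val = (val & ~PFHH_SEM_PF_OWNER_M) |
+ *	      ((owner << PFHH_SEM_PF_OWNER_S) & PFHH_SEM_PF_OWNER_M);
+ */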
+
+#endif
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 01/34] net/ice: Add registers for Intel(R) E800 Series NIC Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12 15:19     ` Ferruh Yigit
  2018-12-12 15:19     ` Ferruh Yigit
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 03/34] net/ice: Add admin queue structures and commands Wenzhuo Lu
                     ` (32 subsequent siblings)
  34 siblings, 2 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures required by the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_type.h | 869 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 869 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_type.h

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..256bf3f
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,869 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+#define BIT_ULL(a) (1ULL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
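+
+/* Example (illustrative): a TC bitmap of 0x05 has TC 0 and TC 2 enabled,
+ * so ice_is_tc_ena(0x05, 2) returns true and ice_is_tc_ena(0x05, 1)
+ * returns false.
+ */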
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) ((n) / (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
+
+static inline u32 ice_round_to_num(u32 N, u32 R)
+{
+	return ((((N) % (R)) < ((R) / 2)) ? (((N) / (R)) * (R)) :
+		((((N) + (R) - 1) / (R)) * (R)));
+}
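
Worth noting: despite its name, round_up_64bit() divides with round-to-nearest, and ice_round_to_num() rounds N to the nearest multiple of R (down when the remainder is below R/2, up otherwise). A minimal standalone sketch of the latter, assuming plain host integer types in place of the driver's u32 typedef:

#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as ice_round_to_num(): round n to the nearest multiple of r. */
static uint32_t round_to_num(uint32_t n, uint32_t r)
{
	return (n % r) < (r / 2) ? (n / r) * r : ((n + r - 1) / r) * r;
}

int main(void)
{
	printf("%u\n", (unsigned)round_to_num(1499, 1000)); /* 1000: remainder 499 < 500 */
	printf("%u\n", (unsigned)round_to_num(1500, 1000)); /* 2000: remainder 500 rounds up */
	return 0;
}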
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Convert from ms to the 1 usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
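
These are OR-able bits tested by the driver's debug print machinery (an ice_debug() helper defined alongside the OS shims, not shown here). A self-contained sketch of the intended usage; the mask value chosen is arbitrary:

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(a)	(1ULL << (a))
#define ICE_DBG_INIT	BIT_ULL(1)
#define ICE_DBG_NVM	BIT_ULL(7)

int main(void)
{
	/* Mirrors hw->debug_mask: enable init and NVM tracing only. */
	uint64_t debug_mask = ICE_DBG_INIT | ICE_DBG_NVM;

	/* A debug print site checks its class bit before emitting output. */
	if (debug_mask & ICE_DBG_NVM)
		printf("NVM tracing enabled\n");
	return 0;
}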
+
+
+
+
+
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+#ifdef ADQ_SUPPORT
+	ICE_VSI_CHNL = 4,
+#endif /* ADQ_SUPPORT */
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bits definition */
+	u64 phy_type_low;
+	u64 phy_type_high;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to the module_type #defines of the ice_aqc_get_phy_caps
+	 * structure for the meaning of module_type[ICE_MODULE_TYPE_TOTAL_BYTE]
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	u64 phy_type_high;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* device is embedded rather than on a card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the RESET_TYPE field of the GLGEN_RSTAT register.
+ * ICE_RESET_PFR does not match any RESET_TYPE field in the GLGEN_RSTAT
+ * register because its reset source is different from the other types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port to queue branches w.r.t. topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM(s)/VSI nodes connect
+ * to the driver-defined default aggregator policy
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
+
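
A hedged usage sketch for these accessors: LE32_TO_CPU comes from the OS abstraction layer, so the identity stub below is an assumption valid only on a little-endian host, and the trimmed node type mirrors just the fields used here:

#include <stdint.h>
#include <stdio.h>

#define LE32_TO_CPU(x)	(x)	/* assumption: little-endian host */

struct elem_data { uint32_t node_teid; uint32_t parent_teid; };
struct sched_node { struct elem_data info; };

#define GET_NODE_TEID(x)	LE32_TO_CPU((x)->info.node_teid)
#define GET_PARENT_TEID(x)	LE32_TO_CPU((x)->info.parent_teid)

int main(void)
{
	struct sched_node node = { { 0x20, 0x10 } };

	printf("TEID 0x%x under parent 0x%x\n",
	       (unsigned)GET_NODE_TEID(&node), (unsigned)GET_PARENT_TEID(&node));
	return 0;
}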
+struct ice_sched_rl_profile {
+	u32 rate; /* In Kbps */
+	struct ice_aqc_rl_profile_elem info;
+};
+
+/* The aggregator type determines whether the identifier is for a VSI group,
+ * aggregator group, aggregator of queues, or queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+/* Rate limit types */
+enum ice_rl_type {
+	ICE_UNKNOWN_BW = 0,
+	ICE_MIN_BW,		/* for cir profile */
+	ICE_MAX_BW,		/* for eir profile */
+	ICE_SHARED_BW		/* for shared profile */
+};
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_NO_SHARED_RL_PROF_ID	0xFFFF
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+/* Access Macros for Tx Sched RL Profile data */
+#define ICE_TXSCHED_GET_RL_PROF_ID(p) LE16_TO_CPU((p)->info.profile_id)
+#define ICE_TXSCHED_GET_RL_MBS(p) LE16_TO_CPU((p)->info.max_burst_size)
+#define ICE_TXSCHED_GET_RL_MULTIPLIER(p) LE16_TO_CPU((p)->info.rl_multiply)
+#define ICE_TXSCHED_GET_RL_WAKEUP_MV(p) LE16_TO_CPU((p)->info.wake_up_calc)
+#define ICE_TXSCHED_GET_RL_ENCODE(p) LE16_TO_CPU((p)->info.rl_encode)
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range: 1-8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range: 1-9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type leaf). A leaf node is created under
+ *  (a) as a child node; queues are added to that leaf with the add Tx/Rx
+ *  queue admin commands, which need the TEID of (a).
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: Above asterisk tree covers only basic terminology and scenario.
+ *  Refer to the documentation for more info.
+ */
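
To make the layer notion above concrete, here is a standalone sketch that walks parent pointers from node (a) back to the root; the cut-down node type mirrors only the parent field of struct ice_sched_node:

#include <stdio.h>

struct node {
	struct node *parent;	/* NULL at the root */
};

/* Number of layers between a node and the root, the root itself being layer 1. */
static int layer_of(const struct node *n)
{
	int layer = 1;

	while (n->parent) {
		n = n->parent;
		layer++;
	}
	return layer;
}

int main(void)
{
	struct node root = { 0 }, tc = { &root }, mid = { &tc }, a = { &mid };

	printf("node (a) sits on layer %d\n", layer_of(&a)); /* 4, as in the tree above */
	return 0;
}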
+
+/* Data structures for saving BW information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct ice_dcb_ets_cfg {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prio_table[ICE_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[ICE_MAX_TRAFFIC_CLASS];
+	u8 tsatable[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct ice_dcb_pfc_cfg {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcena;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct ice_dcb_app_priority_table {
+	u16 prot_id;
+	u8 priority;
+	u8 selector;
+};
+
+#define ICE_MAX_USER_PRIORITY	8
+#define ICE_DCBX_MAX_APPS	32
+#define ICE_LLDPDU_SIZE		1500
+#define ICE_TLV_STATUS_OPER	0x1
+#define ICE_TLV_STATUS_SYNC	0x2
+#define ICE_TLV_STATUS_ERR	0x4
+#define ICE_APP_PROT_ID_FCOE	0x8906
+#define ICE_APP_PROT_ID_ISCSI	0x0cbc
+#define ICE_APP_PROT_ID_FIP	0x8914
+#define ICE_APP_SEL_ETHTYPE	0x1
+#define ICE_APP_SEL_TCPIP	0x2
+#define ICE_CEE_APP_SEL_ETHTYPE	0x0
+#define ICE_CEE_APP_SEL_TCPIP	0x1
+
+struct ice_dcbx_cfg {
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct ice_dcb_ets_cfg etscfg;
+	struct ice_dcb_ets_cfg etsrec;
+	struct ice_dcb_pfc_cfg pfc;
+	struct ice_dcb_app_priority_table app[ICE_DCBX_MAX_APPS];
+	u8 dcbx_mode;
+#define ICE_DCBX_MODE_CEE	0x1
+#define ICE_DCBX_MODE_IEEE	0x2
+	u8 app_mode;
+#define ICE_DCBX_APPS_NON_WILLING	0x1
+};
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	/* List contain profile id(s) and other params per layer */
+	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to configure */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* cumulative mask of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	/* 2D Array for each Tx Sched RL Profile type */
+	struct ice_sched_rl_profile **cir_profiles;
+	struct ice_sched_rl_profile **eir_profiles;
+	struct ice_sched_rl_profile **srl_profiles;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	u16 max_burst_size;	/* driver sets this value */
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregators */
+	struct ice_bw_type_info tc_node_bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the itr/intrl granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
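
A minimal sketch of deriving itr_gran/intrl_gran from the max aggregate bandwidth code above; the 25G-vs-above split follows the defines, but reading that code out of GL_PWR_MODE_CTL is assumed to have happened already:

#include <stdint.h>
#include <stdio.h>

#define ICE_MAX_AGG_BW_25G	0x3
#define ICE_ITR_GRAN_ABOVE_25	2
#define ICE_ITR_GRAN_MAX_25	4
#define ICE_INTRL_GRAN_ABOVE_25	4
#define ICE_INTRL_GRAN_MAX_25	8

/* Pick the ITR/INTRL granularity (in us) for a given max aggregate BW code. */
static void pick_gran(uint8_t max_agg_bw, uint8_t *itr, uint8_t *intrl)
{
	if (max_agg_bw == ICE_MAX_AGG_BW_25G) {
		*itr = ICE_ITR_GRAN_MAX_25;
		*intrl = ICE_INTRL_GRAN_MAX_25;
	} else {	/* 50G, 100G or 200G parts use the finer granularity */
		*itr = ICE_ITR_GRAN_ABOVE_25;
		*intrl = ICE_INTRL_GRAN_ABOVE_25;
	}
}

int main(void)
{
	uint8_t itr, intrl;

	pick_gran(ICE_MAX_AGG_BW_25G, &itr, &intrl);
	printf("itr_gran=%u us, intrl_gran=%u us\n", itr, intrl);
	return 0;
}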
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ND_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
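
Several of the version words above pack fields with the masks just defined. A standalone decode sketch; the sample word value is made up purely for illustration:

#include <stdint.h>
#include <stdio.h>

#define ICE_NVM_VER_LO_SHIFT	0
#define ICE_NVM_VER_LO_MASK	(0xff << ICE_NVM_VER_LO_SHIFT)
#define ICE_NVM_VER_HI_SHIFT	12
#define ICE_NVM_VER_HI_MASK	(0xf << ICE_NVM_VER_HI_SHIFT)

int main(void)
{
	uint16_t ver = 0x2018;	/* made-up dev starter version word */

	printf("NVM version %x.%02x\n",
	       (unsigned)((ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT),
	       (unsigned)((ver & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT));
	return 0;
}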
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
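
A standalone sketch of that rule: the 16-bit sum of every word, checksum word included, must equal 0xBABA. Reading the words out of the Shadow RAM, and any regions a real validation pass may skip (such as VPD), is out of scope here:

#include <stdint.h>
#include <stddef.h>

#define ICE_SR_SW_CHECKSUM_BASE	0xBABA

/* Sum all words, checksum word included; valid iff the 16-bit sum is 0xBABA. */
static int sr_checksum_ok(const uint16_t *words, size_t nwords)
{
	uint16_t sum = 0;
	size_t i;

	for (i = 0; i < nwords; i++)
		sum += words[i];
	return sum == ICE_SR_SW_CHECKSUM_BASE;
}

int main(void)
{
	uint16_t img[4] = { 0x1111, 0x2222, 0x3333 };

	/* Choose the checksum word so the total comes out to 0xBABA. */
	img[3] = ICE_SR_SW_CHECKSUM_BASE - (uint16_t)(img[0] + img[1] + img[2]);
	return !sr_checksum_ok(img, 4);	/* exits 0 on success */
}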
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/*
+ * Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL register.
+ * This is needed to determine the BAR0 space for the VFs
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 03/34] net/ice: Add admin queue structures and commands
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 01/34] net/ice: Add registers for Intel(R) E800 Series NIC Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 04/34] net/ice: Add sideband queue info Wenzhuo Lu
                     ` (31 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures for
the admin queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 1891 +++++++++++++++++++++++++++++++++
 1 file changed, 1891 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..9332f84
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1891 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and driver is
+	 * expected to release resource before timeout. This value is provided
+	 * in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
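
A hedged sketch of filling this descriptor to request the NVM resource for reading; CPU_TO_LE16 is stubbed as an identity for a little-endian host, the struct is a trimmed mirror of ice_aqc_req_res, and actually posting the descriptor through the admin queue is left out:

#include <stdint.h>
#include <string.h>

#define CPU_TO_LE16(x)	(x)	/* assumption: little-endian host */
#define ICE_AQC_RES_ID_NVM	1
#define ICE_AQC_RES_ACCESS_READ	1

struct req_res {		/* trimmed mirror of struct ice_aqc_req_res */
	uint16_t res_id;
	uint16_t access_type;
	uint32_t timeout;
	uint32_t res_number;
	uint16_t status;
	uint8_t reserved[2];
};

int main(void)
{
	struct req_res cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.res_id = CPU_TO_LE16(ICE_AQC_RES_ID_NVM);
	cmd.access_type = CPU_TO_LE16(ICE_AQC_RES_ACCESS_READ);
	/* On completion FW fills cmd.timeout (ms); release before it expires. */
	return 0;
}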
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
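
The sah/sal pair above carries the MAC big-endian: the first two bytes in sah, the remaining four in sal. A standalone sketch of the packing as numeric values; converting those values into the __be16/__be32 descriptor fields still takes a CPU-to-big-endian store in the real code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint8_t mac[6] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc };
	uint16_t sah;	/* numeric value of the high 16 MAC bits */
	uint32_t sal;	/* numeric value of the low 32 MAC bits */

	sah = ((uint16_t)mac[0] << 8) | mac[1];
	sal = ((uint32_t)mac[2] << 24) | ((uint32_t)mac[3] << 16) |
	      ((uint32_t)mac[4] << 8) | mac[5];
	/* These still need a CPU-to-big-endian store into the descriptor. */
	printf("sah=0x%04x sal=0x%08x\n", (unsigned)sah, (unsigned)sal);
	return 0;
}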
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In the response, a non-zero value means that not all of the
+	 * configuration was returned and that a new command shall be sent
+	 * with this value in the 'first desc' field.
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
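
The 'element' continuation scheme suggests a simple fetch loop: reissue the command with the returned next_elem until it comes back zero. A runnable skeleton where aq_get_sw_cfg() is a purely hypothetical stand-in for the real admin-queue exchange:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the 0x0200 admin-queue exchange: takes the
 * 'first desc' value and returns the response's next_elem (0 once done).
 */
static uint16_t aq_get_sw_cfg(uint16_t first_desc)
{
	/* Pretend the switch config spans two response buffers. */
	return first_desc == 0 ? 8 : 0;
}

int main(void)
{
	uint16_t next = 0;

	do {
		next = aq_get_sw_cfg(next);
		printf("fetched one buffer, next_elem=%u\n", next);
	} while (next);	/* non-zero next_elem: resend with it as 'first desc' */
	return 0;
}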
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
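
Since ingress_table packs one 3-bit entry per user priority, an identity UP mapping is a shift-and-OR loop over the same 3-bit stride as the UP0..UP7 defines above; a standalone sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t table = 0;
	uint32_t up;

	/* Identity mapping: UP n maps to n, 3 bits per UP as in ingress_table. */
	for (up = 0; up < 8; up++)
		table |= up << (up * 3);
	printf("ingress_table = 0x%06x\n", (unsigned)table);	/* 0xfac688 */
	return 0;
}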
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, referring to the number of rules.
+	 * ops: update switch rules, referring to the number of filters.
+	 * ops: remove switch rules, referring to the entry index.
+	 * ops: get switch rules, referring to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. Last two bits
+	 * are other action identifier
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
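
A hedged sketch of composing the 32-bit act word above for a forward-to-VSI action, using only the single-action defines from this structure; whether extra bits (LAN/loopback enable) are also needed depends on the rule direction, so only the bare minimum is shown:

#include <stdint.h>
#include <stdio.h>

#define BIT(a)	(1UL << (a))
#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
#define ICE_SINGLE_ACT_VSI_ID_S		4
#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)

int main(void)
{
	uint32_t vsi_id = 3;
	uint32_t act = ICE_SINGLE_ACT_VSI_FORWARDING |
		       ((vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
			ICE_SINGLE_ACT_VSI_ID_M) |
		       ICE_SINGLE_ACT_VALID_BIT;

	printf("act = 0x%08x\n", (unsigned)act);	/* forward to VSI 3, valid */
	return 0;
}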
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:1 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+#pragma pack(1)
+/* Add switch rule response:
+ * The content of the return buffer is the same as the input buffer; the
+ * status field and LUT index are updated as part of the response.
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} pdata;
+};
+
+#pragma pack()
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_txsched_move_grp_info_hdr {
+	__le32 src_parent_teid;
+	__le32 dest_parent_teid;
+	__le16 num_elems;
+	__le16 reserved;
+};
+
+
+struct ice_aqc_move_elem {
+	struct ice_aqc_txsched_move_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_conf_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+/* Rate limiting profile for
+ * Add RL profile (indirect 0x0410)
+ * Query RL profile (indirect 0x0411)
+ * Remove RL profile (indirect 0x0415)
+ * These indirect commands act on single or multiple
+ * RL profiles with the specified data.
+ */
+struct ice_aqc_rl_profile {
+	__le16 num_profiles;
+	__le16 num_processed; /* Only for response. Reserved in Command. */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_rl_profile_elem {
+	u8 level;
+	u8 flags;
+#define ICE_AQC_RL_PROFILE_TYPE_S	0x0
+#define ICE_AQC_RL_PROFILE_TYPE_M	(0x3 << ICE_AQC_RL_PROFILE_TYPE_S)
+#define ICE_AQC_RL_PROFILE_TYPE_CIR	0
+#define ICE_AQC_RL_PROFILE_TYPE_EIR	1
+#define ICE_AQC_RL_PROFILE_TYPE_SRL	2
+/* The following flag is used for Query RL Profile Data */
+#define ICE_AQC_RL_PROFILE_INVAL_S	0x7
+#define ICE_AQC_RL_PROFILE_INVAL_M	(0x1 << ICE_AQC_RL_PROFILE_INVAL_S)
+
+	__le16 profile_id;
+	__le16 max_burst_size;
+	__le16 rl_multiply;
+	__le16 wake_up_calc;
+	__le16 rl_encode;
+};
+
+
+struct ice_aqc_rl_profile_generic_elem {
+	struct ice_aqc_rl_profile_elem generic[1];
+};
+
+
+
+/* Configure L2 Node CGD (indirect 0x0414)
+ * This indirect command allows configuring a congestion domain for given L2
+ * node TEIDs in the scheduler topology.
+ */
+struct ice_aqc_cfg_l2_node_cgd {
+	__le16 num_l2_nodes;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_elem {
+	__le32 node_teid;
+	u8 cgd;
+	u8 reserved[3];
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_data {
+	struct ice_aqc_cfg_l2_node_cgd_elem elem[1];
+};
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+/* Query Node to Root Topology (indirect 0x0413)
+ * This command uses ice_aqc_get_elem as its data buffer.
+ */
+struct ice_aqc_query_node_to_root {
+	__le32 teid;
+	__le32 num_nodes; /* Response only */
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* These are the PHY type #defines (extended):
+ * the first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_50GBASE_CR2		BIT_ULL(36)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR2		BIT_ULL(37)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR2		BIT_ULL(38)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR2		BIT_ULL(39)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC	BIT_ULL(40)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2		BIT_ULL(41)
+#define ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC	BIT_ULL(42)
+#define ICE_PHY_TYPE_LOW_50G_AUI2		BIT_ULL(43)
+#define ICE_PHY_TYPE_LOW_50GBASE_CP		BIT_ULL(44)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR		BIT_ULL(45)
+#define ICE_PHY_TYPE_LOW_50GBASE_FR		BIT_ULL(46)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR		BIT_ULL(47)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4	BIT_ULL(48)
+#define ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC	BIT_ULL(49)
+#define ICE_PHY_TYPE_LOW_50G_AUI1		BIT_ULL(50)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR4		BIT_ULL(51)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR4		BIT_ULL(52)
+#define ICE_PHY_TYPE_LOW_100GBASE_LR4		BIT_ULL(53)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR4		BIT_ULL(54)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC	BIT_ULL(55)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4		BIT_ULL(56)
+#define ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC	BIT_ULL(57)
+#define ICE_PHY_TYPE_LOW_100G_AUI4		BIT_ULL(58)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4	BIT_ULL(59)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4	BIT_ULL(60)
+#define ICE_PHY_TYPE_LOW_100GBASE_CP2		BIT_ULL(61)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR2		BIT_ULL(62)
+#define ICE_PHY_TYPE_LOW_100GBASE_DR		BIT_ULL(63)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+/* The second set of defines is for phy_type_high. */
+#define ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4	BIT_ULL(0)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC	BIT_ULL(1)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2		BIT_ULL(2)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC	BIT_ULL(3)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2		BIT_ULL(4)
+#define ICE_PHY_TYPE_HIGH_MAX_INDEX		19
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR2			BIT(7)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR_PAM4		BIT(8)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR4			BIT(9)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR2_PAM4		BIT(10)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
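
An illustrative sketch of the flow implied by the NOTE above (not part of
the patch, and an assumption about driver usage): the Set PHY config buffer
is typically seeded from a prior Get PHY capabilities response before link
is enabled. Here "caps" is a struct ice_aqc_get_phy_caps_data and "cfg" a
struct ice_aqc_set_phy_cfg_data.

	cfg.phy_type_low = caps.phy_type_low;
	cfg.phy_type_high = caps.phy_type_high;
	cfg.low_power_ctrl = caps.low_power_ctrl;
	cfg.eee_cap = caps.eee_cap;
	cfg.eeer_value = caps.eeer_value;
	cfg.link_fec_opt = caps.link_fec_options;
	/* keep the reported abilities, then ask for link up */
	cfg.caps = caps.caps | ICE_AQ_PHY_ENA_LINK;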
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_50GB		BIT(9)
+#define ICE_AQ_LINK_SPEED_100GB		BIT(10)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+};
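
An illustrative decode of the response above (not part of the patch;
LE16_TO_CPU() is assumed from ice_osdep.h):

	static bool link_up_at_25g(struct ice_aqc_get_link_status_data *d)
	{
		return (d->link_info & ICE_AQ_LINK_UP) &&
		       (LE16_TO_CPU(d->link_speed) & ICE_AQ_LINK_SPEED_25GB);
	}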
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
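
An illustrative sketch (not part of the patch): the 24-bit flash offset is
split across the offset_low and offset_high fields; "offset" is a
hypothetical u32 holding it.

	cmd.offset_low = CPU_TO_LE16(offset & 0xFFFF);	/* low 16 bits */
	cmd.offset_high = (u8)((offset >> 16) & 0xFF);	/* high 8 bits */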
+
+
+/* NVM Config Read/Write commands (indirect 0x0704/0x0705) */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
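
An illustrative sketch (not part of the patch): compose the vsi_id and
flags words for setting a 512-entry PF LUT; "vsi_id" is a hypothetical u16.

	cmd.vsi_id = CPU_TO_LE16(vsi_id | ICE_AQC_GSET_RSS_LUT_VSI_VALID);
	cmd.flags = CPU_TO_LE16((ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) |
				(ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S));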
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Note that each
+ * struct ice_aqc_add_tx_qgrp is variable in length, since the
+ * number of queues in a group varies.
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
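
An illustrative size computation (not part of the patch): because txqs[]
is a flexible one-element array, a group carrying num_txqs queues occupies

	sizeof(struct ice_aqc_add_tx_qgrp) +
		(num_txqs - 1) * sizeof(struct ice_aqc_add_txqs_perq)

bytes in the command buffer.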
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
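
An illustrative size computation for the alignment rule above (not part of
the patch): one item carrying num_qs queue IDs occupies 6 + 2 * num_qs
bytes, so when num_qs is even 2 bytes of padding keep the next group's
parent_teid 4-byte aligned.

	item_size = sizeof(struct ice_aqc_dis_txq_item) +
		    (num_qs - 1) * sizeof(__le16);
	if (!(num_qs % 2))	/* an even count leaves a 2-byte hole */
		item_size += 2;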
+
+
+/* TX LAN Queues Cleanup Event (0x0C31) */
+struct ice_aqc_txqs_cleanup {
+	__le16 caller_opc;
+	__le16 cmd_tag;
+	u8 reserved[12];
+};
+
+
+/* Move / Reconfigure TX Queues (indirect 0x0C32) */
+struct ice_aqc_move_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_CMD_TYPE_S		0
+#define ICE_AQC_Q_CMD_TYPE_M		(0x3 << ICE_AQC_Q_CMD_TYPE_S)
+#define ICE_AQC_Q_CMD_TYPE_MOVE		1
+#define ICE_AQC_Q_CMD_TYPE_TC_CHANGE	2
+#define ICE_AQC_Q_CMD_TYPE_MOVE_AND_TC	3
+#define ICE_AQC_Q_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_qs;
+	u8 rsvd;
+	u8 timeout;
+#define ICE_AQC_Q_CMD_TIMEOUT_S		2
+#define ICE_AQC_Q_CMD_TIMEOUT_M		(0x3F << ICE_AQC_Q_CMD_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the move TX LAN Queues
+ * command (0x0C32).
+ */
+struct ice_aqc_move_txqs_elem {
+	__le16 txq_id;
+	u8 q_cgd;
+	u8 rsvd;
+	__le32 q_teid;
+};
+
+
+struct ice_aqc_move_txqs_data {
+	__le32 src_teid;
+	__le32 dest_teid;
+	struct ice_aqc_move_txqs_elem txqs[1];
+};
+
+
+
+
+
+
+/* LAN Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
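
An illustrative sketch (not part of the patch): one entry that enables INIT
and ERR level messages for the link module. The module ID lives in bits
0-11 and the enable bits in 12-15; a buffer of n entries is sent with
datalen = n * 2.

	data->entry[0] = CPU_TO_LE16(ICE_AQC_FW_LOG_ID_LINK |
				     ICE_AQC_FW_LOG_INIT_EN |
				     ICE_AQC_FW_LOG_ERR_EN);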
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_h: opaque data high-half
+ * @cookie_l: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_query_node_to_root query_node_to_root;
+		struct ice_aqc_cfg_l2_node_cgd cfg_l2_node_cgd;
+		struct ice_aqc_rl_profile rl_profile;
+
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_txqs_cleanup txqs_cleanup;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
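
An illustrative sketch (not part of the patch): an indirect command whose
buffer exceeds ICE_AQ_LG_BUF and which the FW should read rather than
write would carry BUF, LB and RD. The control queue send routine later in
this series sets BUF and LB this way; treating RD as "FW reads the buffer"
is stated here as an assumption.

	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF | ICE_AQ_FLAG_LB |
				  ICE_AQ_FLAG_RD);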
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* Object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_add_rl_profiles			= 0x0410,
+	ice_aqc_opc_query_rl_profiles			= 0x0411,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+	ice_aqc_opc_remove_rl_profiles			= 0x0415,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+
+
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 04/34] net/ice: Add sideband queue info
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (2 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 03/34] net/ice: Add admin queue structures and commands Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 05/34] net/ice: Add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
                     ` (30 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures
for the sideband queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sbq_cmd.h | 93 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h

diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect command
+ * and non-posted
+ */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
+#endif /* _ICE_SBQ_CMD_H_ */
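
An illustrative sketch (not part of the patch): an internal read request to
the CGU device using the input struct above; 0x24 is a hypothetical
register offset.

	struct ice_sbq_msg_input msg = { 0 };

	msg.dest_dev = cgu;
	msg.opcode = ice_sbq_msg_rd;
	msg.msg_addr_low = 0x24;	/* hypothetical register offset */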
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 05/34] net/ice: Add device IDs for Intel(r) E800 Series NICs
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (3 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 04/34] net/ice: Add sideband queue info Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 06/34] net/ice: Add control queue information Wenzhuo Lu
                     ` (29 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add all the device IDs that represent the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_devids.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_devids.h

diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 06/34] net/ice: Add control queue information
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (4 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 05/34] net/ice: Add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 07/34] net/ice: Add data center bridging (DCB) Wenzhuo Lu
                     ` (28 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures for the control queues.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_controlq.c | 1098 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_controlq.h |   97 ++++
 2 files changed, 1195 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..fb82c23
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if Queue is enabled else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design; there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ register
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event q)
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
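
A worked example with illustrative numbers (not part of the patch): if the
driver expects API 1.5, a reported 1.8 logs the "newer than expected"
message (minor delta greater than 2), a reported 1.2 logs the "older than
expected" message, and a reported 2.0 returns false so the driver refuses
to load.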
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it is alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
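
An illustrative caller sketch (not part of the patch; the sizing constants
are hypothetical): per the comment above, the four fields must be set on
both queues before the call.

	hw->adminq.num_sq_entries = 32;
	hw->adminq.num_rq_entries = 32;
	hw->adminq.sq_buf_size = 4096;
	hw->adminq.rq_buf_size = 4096;
	/* ...same four fields on hw->mailboxq, then... */
	status = ice_init_all_ctrlq(hw);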
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * returns the number of free desc
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW/MBX; the function returns the
+	 * number of desc available. The clean function called here could be
+	 * called in a separate thread in case of asynchronous completions.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > than buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if a timeout occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode to be placed in the descriptor
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
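+
+/* Illustrative usage (not part of this patch): callers would typically pair
+ * ice_fill_dflt_direct_cmd_desc() with ice_sq_send_cmd(); the opcode below
+ * is only an example.
+ *
+ *    struct ice_aq_desc desc;
+ *    enum ice_status status;
+ *
+ *    ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+ *    status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ */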
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
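+
+/* Illustrative usage (not part of this patch): a typical admin receive
+ * event poll, assuming the caller owns a buffer 'buf' of
+ * ICE_AQ_MAX_BUF_LEN bytes.
+ *
+ *    struct ice_rq_event_info event;
+ *    u16 pending;
+ *
+ *    event.buf_len = ICE_AQ_MAX_BUF_LEN;
+ *    event.msg_buf = buf;
+ *    do {
+ *        if (ice_clean_rq_elem(hw, &hw->adminq, &event, &pending))
+ *            break;
+ *        (process event.desc and event.msg_buf here)
+ *    } while (pending);
+ */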
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
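+
+/* Example (illustrative): with count = 8, next_to_clean = 2 and
+ * next_to_use = 5, ICE_CTL_Q_DESC_UNUSED() yields 8 + 2 - 5 - 1 = 4 free
+ * descriptors; one slot is always left unused so a full ring can be told
+ * apart from an empty one.
+ */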
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address to dma head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 07/34] net/ice: Add data center bridging (DCB)
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (5 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 06/34] net/ice: Add control queue information Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 08/34] net/ice: Add basic transmit scheduler Wenzhuo Lu
                     ` (27 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the code to handle DCB: LLDP MIB get/set, IEEE and CEE DCBX
TLV parsing, and port ETS query/configuration.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 1385 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_dcb.h |  220 +++++++
 2 files changed, 1605 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_dcb.c
 create mode 100644 drivers/net/ice/base/ice_dcb.h

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
new file mode 100644
index 0000000..76411d5
--- /dev/null
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -0,0 +1,1385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_dcb.h"
+
+/**
+ * ice_aq_get_lldp_mib
+ * @hw: pointer to the hw struct
+ * @bridge_type: type of bridge requested
+ * @mib_type: Local, Remote or both Local and Remote MIBs
+ * @buf: pointer to the caller-supplied buffer to store the MIB block
+ * @buf_size: size of the buffer (in bytes)
+ * @local_len: length of the returned Local LLDP MIB
+ * @remote_len: length of the returned Remote LLDP MIB
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests the complete LLDP MIB (entire packet). (0x0A00)
+ */
+enum ice_status
+ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf,
+		    u16 buf_size, u16 *local_len, u16 *remote_len,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_get_mib *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.lldp_get_mib;
+
+	if (buf_size == 0 || !buf)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_get_mib);
+
+	cmd->type = mib_type & ICE_AQ_LLDP_MIB_TYPE_M;
+	cmd->type |= (bridge_type << ICE_AQ_LLDP_BRID_TYPE_S) &
+		ICE_AQ_LLDP_BRID_TYPE_M;
+
+	desc.datalen = CPU_TO_LE16(buf_size);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		if (local_len)
+			*local_len = LE16_TO_CPU(cmd->local_len);
+		if (remote_len)
+			*remote_len = LE16_TO_CPU(cmd->remote_len);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_cfg_lldp_mib_change
+ * @hw: pointer to the hw struct
+ * @ena_update: Enable or Disable event posting
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable or Disable posting of an event on ARQ when LLDP MIB
+ * associated with the interface changes (0x0A01)
+ */
+enum ice_status
+ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_set_mib_change *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_set_event;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_set_mib_change);
+
+	if (!ena_update)
+		cmd->command |= ICE_AQ_LLDP_MIB_UPDATE_DIS;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_start_lldp
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Start the embedded LLDP Agent on all ports. (0x0A06)
+ */
+enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_start *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_start;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_start);
+
+	cmd->command = ICE_AQ_LLDP_AGENT_START;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_lldp_mib - Set the LLDP MIB
+ * @hw: pointer to the hw struct
+ * @mib_type: Local, Remote or both Local and Remote MIBs
+ * @buf: pointer to the caller-supplied buffer holding the MIB block to set
+ * @buf_size: size of the buffer (in bytes)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the LLDP MIB. (0x0A08)
+ */
+enum ice_status
+ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_set_local_mib *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_set_mib;
+
+	if (buf_size == 0 || !buf)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_set_local_mib);
+
+	desc.flags |= CPU_TO_LE16((u16)ICE_AQ_FLAG_RD);
+	desc.datalen = CPU_TO_LE16(buf_size);
+
+	cmd->type = mib_type;
+	cmd->length = CPU_TO_LE16(buf_size);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_dcbx_status
+ * @hw: pointer to the hw struct
+ *
+ * Get the DCBX status from the Firmware
+ */
+u8 ice_get_dcbx_status(struct ice_hw *hw)
+{
+	u32 reg;
+
+	reg = rd32(hw, PRTDCB_GENS);
+	return (u8)((reg & PRTDCB_GENS_DCBX_STATUS_M) >>
+		    PRTDCB_GENS_DCBX_STATUS_S);
+}
+
+/**
+ * ice_parse_ieee_ets_common_tlv
+ * @buf: Data buffer to be parsed for ETS CFG/REC data
+ * @ets_cfg: Container to store parsed data
+ *
+ * Parses the common data of IEEE 802.1Qaz ETS CFG/REC TLV
+ */
+static void
+ice_parse_ieee_ets_common_tlv(u8 *buf, struct ice_dcb_ets_cfg *ets_cfg)
+{
+	u8 offset = 0;
+	int i;
+
+	/* Priority Assignment Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < 4; i++) {
+		ets_cfg->prio_table[i * 2] =
+			((buf[offset] & ICE_IEEE_ETS_PRIO_1_M) >>
+			 ICE_IEEE_ETS_PRIO_1_S);
+		ets_cfg->prio_table[i * 2 + 1] =
+			((buf[offset] & ICE_IEEE_ETS_PRIO_0_M) >>
+			 ICE_IEEE_ETS_PRIO_0_S);
+		offset++;
+	}
+
+	/* TC Bandwidth Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 *
+	 * TSA Assignment Table (8 octets)
+	 * Octets:| 9 | 10| 11| 12| 13| 14| 15| 16|
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		ets_cfg->tcbwtable[i] = buf[offset];
+		ets_cfg->tsatable[i] = buf[ICE_MAX_TRAFFIC_CLASS + offset++];
+	}
+}
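+
+/* Example (illustrative): a priority-table octet of 0x31 carries
+ * priority 3 in bits 7:4 and priority 1 in bits 3:0, so the first octet
+ * parsed yields prio_table[0] = 3 and prio_table[1] = 1.
+ */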
+
+/**
+ * ice_parse_ieee_etscfg_tlv
+ * @tlv: IEEE 802.1Qaz ETS CFG TLV
+ * @dcbcfg: Local store to update ETS CFG data
+ *
+ * Parses IEEE 802.1Qaz ETS CFG TLV
+ */
+static void
+ice_parse_ieee_etscfg_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+
+	/* First Octet post subtype
+	 * --------------------------
+	 * |will-|CBS  | Re-  | Max |
+	 * |ing  |     |served| TCs |
+	 * --------------------------
+	 * |1bit | 1bit|3 bits|3bits|
+	 */
+	etscfg = &dcbcfg->etscfg;
+	etscfg->willing = ((buf[0] & ICE_IEEE_ETS_WILLING_M) >>
+			   ICE_IEEE_ETS_WILLING_S);
+	etscfg->cbs = ((buf[0] & ICE_IEEE_ETS_CBS_M) >> ICE_IEEE_ETS_CBS_S);
+	etscfg->maxtcs = ((buf[0] & ICE_IEEE_ETS_MAXTC_M) >>
+			  ICE_IEEE_ETS_MAXTC_S);
+
+	/* Begin parsing at Priority Assignment Table (offset 1 in buf) */
+	ice_parse_ieee_ets_common_tlv(&buf[1], etscfg);
+}
+
+/**
+ * ice_parse_ieee_etsrec_tlv
+ * @tlv: IEEE 802.1Qaz ETS REC TLV
+ * @dcbcfg: Local store to update ETS REC data
+ *
+ * Parses IEEE 802.1Qaz ETS REC TLV
+ */
+static void
+ice_parse_ieee_etsrec_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	/* Begin parsing at Priority Assignment Table (offset 1 in buf) */
+	ice_parse_ieee_ets_common_tlv(&buf[1], &dcbcfg->etsrec);
+}
+
+/**
+ * ice_parse_ieee_pfccfg_tlv
+ * @tlv: IEEE 802.1Qaz PFC CFG TLV
+ * @dcbcfg: Local store to update PFC CFG data
+ *
+ * Parses IEEE 802.1Qaz PFC CFG TLV
+ */
+static void
+ice_parse_ieee_pfccfg_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	/* ----------------------------------------
+	 * |will-|MBC  | Re-  | PFC |  PFC Enable  |
+	 * |ing  |     |served| cap |              |
+	 * -----------------------------------------
+	 * |1bit | 1bit|2 bits|4bits| 1 octet      |
+	 */
+	dcbcfg->pfc.willing = ((buf[0] & ICE_IEEE_PFC_WILLING_M) >>
+			       ICE_IEEE_PFC_WILLING_S);
+	dcbcfg->pfc.mbc = ((buf[0] & ICE_IEEE_PFC_MBC_M) >> ICE_IEEE_PFC_MBC_S);
+	dcbcfg->pfc.pfccap = ((buf[0] & ICE_IEEE_PFC_CAP_M) >>
+			      ICE_IEEE_PFC_CAP_S);
+	dcbcfg->pfc.pfcena = buf[1];
+}
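+
+/* Example (illustrative): buf[0] = 0x88 decodes to willing = 1 (bit 7),
+ * mbc = 0 and a PFC capability of 8 traffic classes; buf[1] = 0x0F then
+ * enables PFC on priorities 0-3.
+ */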
+
+/**
+ * ice_parse_ieee_app_tlv
+ * @tlv: IEEE 802.1Qaz APP TLV
+ * @dcbcfg: Local store to update APP PRIO data
+ *
+ * Parses IEEE 802.1Qaz APP PRIO TLV
+ */
+static void
+ice_parse_ieee_app_tlv(struct ice_lldp_org_tlv *tlv,
+		       struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 offset = 0;
+	u16 typelen;
+	int i = 0;
+	u16 len;
+	u8 *buf;
+
+	typelen = NTOHS(tlv->typelen);
+	len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+	buf = tlv->tlvinfo;
+
+	/* Remove sizeof(ouisubtype) and the reserved byte from len;
+	 * the remaining length divided by 3 is the number of APP entries.
+	 */
+	len -= (sizeof(tlv->ouisubtype) + 1);
+
+	/* Move offset to App Priority Table */
+	offset++;
+
+	/* Application Priority Table (3 octets)
+	 * Octets:|         1          |    2    |    3    |
+	 *        -----------------------------------------
+	 *        |Priority|Rsrvd| Sel |    Protocol ID    |
+	 *        -----------------------------------------
+	 *   Bits:|23    21|20 19|18 16|15                0|
+	 *        -----------------------------------------
+	 */
+	while (offset < len) {
+		dcbcfg->app[i].priority = ((buf[offset] &
+					    ICE_IEEE_APP_PRIO_M) >>
+					   ICE_IEEE_APP_PRIO_S);
+		dcbcfg->app[i].selector = ((buf[offset] &
+					    ICE_IEEE_APP_SEL_M) >>
+					   ICE_IEEE_APP_SEL_S);
+		dcbcfg->app[i].prot_id = (buf[offset + 1] << 0x8) |
+			buf[offset + 2];
+		/* Move to next app */
+		offset += 3;
+		i++;
+		if (i >= ICE_DCBX_MAX_APPS)
+			break;
+	}
+
+	dcbcfg->numapps = i;
+}
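+
+/* Example (illustrative): the APP entry octets 0x62 0x8C 0xCA decode to
+ * priority 3 (bits 23:21), selector 2 (bits 18:16) and protocol
+ * ID 0x8CCA.
+ */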
+
+/**
+ * ice_parse_ieee_tlv
+ * @tlv: IEEE 802.1Qaz TLV
+ * @dcbcfg: Local store to update DCBX config data
+ *
+ * Get the TLV subtype and send it to parsing function
+ * based on the subtype value
+ */
+static void
+ice_parse_ieee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 ouisubtype;
+	u8 subtype;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >>
+		       ICE_LLDP_TLV_SUBTYPE_S);
+	switch (subtype) {
+	case ICE_IEEE_SUBTYPE_ETS_CFG:
+		ice_parse_ieee_etscfg_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_ETS_REC:
+		ice_parse_ieee_etsrec_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_PFC_CFG:
+		ice_parse_ieee_pfccfg_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_APP_PRI:
+		ice_parse_ieee_app_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_parse_cee_pgcfg_tlv
+ * @tlv: CEE DCBX PG CFG TLV
+ * @dcbcfg: Local store to update ETS CFG data
+ *
+ * Parses CEE DCBX PG CFG TLV
+ */
+static void
+ice_parse_cee_pgcfg_tlv(struct ice_cee_feat_tlv *tlv,
+			struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+	u16 offset = 0;
+	int i;
+
+	etscfg = &dcbcfg->etscfg;
+
+	if (tlv->en_will_err & ICE_CEE_FEAT_TLV_WILLING_M)
+		etscfg->willing = 1;
+
+	etscfg->cbs = 0;
+	/* Priority Group Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < 4; i++) {
+		etscfg->prio_table[i * 2] =
+			((buf[offset] & ICE_CEE_PGID_PRIO_1_M) >>
+			 ICE_CEE_PGID_PRIO_1_S);
+		etscfg->prio_table[i * 2 + 1] =
+			((buf[offset] & ICE_CEE_PGID_PRIO_0_M) >>
+			 ICE_CEE_PGID_PRIO_0_S);
+		offset++;
+	}
+
+	/* PG Percentage Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |pg0|pg1|pg2|pg3|pg4|pg5|pg6|pg7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++)
+		etscfg->tcbwtable[i] = buf[offset++];
+
+	/* Number of TCs supported (1 octet) */
+	etscfg->maxtcs = buf[offset];
+}
+
+/**
+ * ice_parse_cee_pfccfg_tlv
+ * @tlv: CEE DCBX PFC CFG TLV
+ * @dcbcfg: Local store to update PFC CFG data
+ *
+ * Parses CEE DCBX PFC CFG TLV
+ */
+static void
+ice_parse_cee_pfccfg_tlv(struct ice_cee_feat_tlv *tlv,
+			 struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	if (tlv->en_will_err & ICE_CEE_FEAT_TLV_WILLING_M)
+		dcbcfg->pfc.willing = 1;
+
+	/* ------------------------
+	 * | PFC Enable | PFC TCs |
+	 * ------------------------
+	 * | 1 octet    | 1 octet |
+	 */
+	dcbcfg->pfc.pfcena = buf[0];
+	dcbcfg->pfc.pfccap = buf[1];
+}
+
+/**
+ * ice_parse_cee_app_tlv
+ * @tlv: CEE DCBX APP TLV
+ * @dcbcfg: Local store to update APP PRIO data
+ *
+ * Parses CEE DCBX APP PRIO TLV
+ */
+static void
+ice_parse_cee_app_tlv(struct ice_cee_feat_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 len, typelen, offset = 0;
+	struct ice_cee_app_prio *app;
+	u8 i;
+
+	typelen = NTOHS(tlv->hdr.typelen);
+	len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+
+	dcbcfg->numapps = len / sizeof(*app);
+	if (!dcbcfg->numapps)
+		return;
+	if (dcbcfg->numapps > ICE_DCBX_MAX_APPS)
+		dcbcfg->numapps = ICE_DCBX_MAX_APPS;
+
+	for (i = 0; i < dcbcfg->numapps; i++) {
+		u8 up, selector;
+
+		app = (struct ice_cee_app_prio *)(tlv->tlvinfo + offset);
+		for (up = 0; up < ICE_MAX_USER_PRIORITY; up++)
+			if (app->prio_map & BIT(up))
+				break;
+
+		dcbcfg->app[i].priority = up;
+
+		/* Get Selector from lower 2 bits, and convert to IEEE */
+		selector = (app->upper_oui_sel & ICE_CEE_APP_SELECTOR_M);
+		switch (selector) {
+		case ICE_CEE_APP_SEL_ETHTYPE:
+			dcbcfg->app[i].selector = ICE_APP_SEL_ETHTYPE;
+			break;
+		case ICE_CEE_APP_SEL_TCPIP:
+			dcbcfg->app[i].selector = ICE_APP_SEL_TCPIP;
+			break;
+		default:
+			/* Keep selector as it is for unknown types */
+			dcbcfg->app[i].selector = selector;
+		}
+
+		dcbcfg->app[i].prot_id = NTOHS(app->protocol);
+		/* Move to next app */
+		offset += sizeof(*app);
+	}
+}
+
+/**
+ * ice_parse_cee_tlv
+ * @tlv: CEE DCBX TLV
+ * @dcbcfg: Local store to update DCBX config data
+ *
+ * Get the TLV subtype and send it to parsing function
+ * based on the subtype value
+ */
+static void
+ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_cee_feat_tlv *sub_tlv;
+	u8 subtype, feat_tlv_count = 0;
+	u16 len, tlvlen, typelen;
+	u32 ouisubtype;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >>
+		       ICE_LLDP_TLV_SUBTYPE_S);
+	/* Return if not CEE DCBX */
+	if (subtype != ICE_CEE_DCBX_TYPE)
+		return;
+
+	typelen = NTOHS(tlv->typelen);
+	tlvlen = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+	len = sizeof(tlv->typelen) + sizeof(ouisubtype) +
+		sizeof(struct ice_cee_ctrl_tlv);
+	/* Return if no CEE DCBX Feature TLVs */
+	if (tlvlen <= len)
+		return;
+
+	sub_tlv = (struct ice_cee_feat_tlv *)((char *)tlv + len);
+	while (feat_tlv_count < ICE_CEE_MAX_FEAT_TYPE) {
+		u16 sublen;
+
+		typelen = NTOHS(sub_tlv->hdr.typelen);
+		sublen = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+		subtype = (u8)((typelen & ICE_LLDP_TLV_TYPE_M) >>
+			       ICE_LLDP_TLV_TYPE_S);
+		switch (subtype) {
+		case ICE_CEE_SUBTYPE_PG_CFG:
+			ice_parse_cee_pgcfg_tlv(sub_tlv, dcbcfg);
+			break;
+		case ICE_CEE_SUBTYPE_PFC_CFG:
+			ice_parse_cee_pfccfg_tlv(sub_tlv, dcbcfg);
+			break;
+		case ICE_CEE_SUBTYPE_APP_PRI:
+			ice_parse_cee_app_tlv(sub_tlv, dcbcfg);
+			break;
+		default:
+			return;	/* Invalid Sub-type return */
+		}
+		feat_tlv_count++;
+		/* Move to next sub TLV */
+		sub_tlv = (struct ice_cee_feat_tlv *)
+			  ((char *)sub_tlv + sizeof(sub_tlv->hdr.typelen) +
+			   sublen);
+	}
+}
+
+/**
+ * ice_parse_org_tlv
+ * @tlv: Organization specific TLV
+ * @dcbcfg: Local store to update DCBX config data
+ *
+ * Currently IEEE 802.1Qaz and CEE DCBX TLVs are supported; all other
+ * organizationally specific TLVs are ignored
+ */
+static void
+ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 ouisubtype;
+	u32 oui;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	oui = ((ouisubtype & ICE_LLDP_TLV_OUI_M) >> ICE_LLDP_TLV_OUI_S);
+	switch (oui) {
+	case ICE_IEEE_8021QAZ_OUI:
+		ice_parse_ieee_tlv(tlv, dcbcfg);
+		break;
+	case ICE_CEE_DCBX_OUI:
+		ice_parse_cee_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_lldp_to_dcb_cfg
+ * @lldpmib: LLDPDU to be parsed
+ * @dcbcfg: store for LLDPDU data
+ *
+ * Parse DCB configuration from the LLDPDU
+ */
+enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_lldp_org_tlv *tlv;
+	enum ice_status ret = ICE_SUCCESS;
+	u16 offset = 0;
+	u16 typelen;
+	u16 type;
+	u16 len;
+
+	if (!lldpmib || !dcbcfg)
+		return ICE_ERR_PARAM;
+
+	/* set to the start of LLDPDU */
+	lldpmib += ETH_HEADER_LEN;
+	tlv = (struct ice_lldp_org_tlv *)lldpmib;
+	while (1) {
+		typelen = NTOHS(tlv->typelen);
+		type = ((typelen & ICE_LLDP_TLV_TYPE_M) >> ICE_LLDP_TLV_TYPE_S);
+		len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+		offset += sizeof(typelen) + len;
+
+		/* END TLV or beyond LLDPDU size */
+		if (type == ICE_TLV_TYPE_END || offset > ICE_LLDPDU_SIZE)
+			break;
+
+		switch (type) {
+		case ICE_TLV_TYPE_ORG:
+			ice_parse_org_tlv(tlv, dcbcfg);
+			break;
+		default:
+			break;
+		}
+
+		/* Move to next TLV */
+		tlv = (struct ice_lldp_org_tlv *)
+		      ((char *)tlv + sizeof(tlv->typelen) + len);
+	}
+
+	return ret;
+}
+
+/**
+ * ice_aq_get_dcb_cfg
+ * @hw: pointer to the hw struct
+ * @mib_type: mib type for the query
+ * @bridgetype: bridge type for the query (remote)
+ * @dcbcfg: store for LLDPDU data
+ *
+ * Query DCB configuration from the firmware
+ */
+enum ice_status
+ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
+		   struct ice_dcbx_cfg *dcbcfg)
+{
+	enum ice_status ret;
+	u8 *lldpmib;
+
+	/* Allocate the LLDPDU */
+	lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
+	if (!lldpmib)
+		return ICE_ERR_NO_MEMORY;
+
+	ret = ice_aq_get_lldp_mib(hw, bridgetype, mib_type, (void *)lldpmib,
+				  ICE_LLDPDU_SIZE, NULL, NULL, NULL);
+
+	if (ret == ICE_SUCCESS)
+		/* Parse LLDP MIB to get dcb configuration */
+		ret = ice_lldp_to_dcb_cfg(lldpmib, dcbcfg);
+
+	ice_free(hw, lldpmib);
+
+	return ret;
+}
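+
+/* Illustrative usage (not part of this patch): querying the local MIB
+ * into a caller-owned struct ice_dcbx_cfg could look like
+ *
+ *    ret = ice_aq_get_dcb_cfg(hw, ICE_AQ_LLDP_MIB_LOCAL,
+ *                             ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, &cfg);
+ */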
+
+/**
+ * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * @hw: pointer to the hw struct
+ * @start_dcbx_agent: True if DCBx Agent needs to be started
+ *		      False if DCBx Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBx agent status
+ *		       True if DCBx Agent is active
+ *		       False if DCBx Agent is stopped
+ * @cd: pointer to command details structure or NULL
+ *
+ * Start/Stop the embedded dcbx Agent. Even when this wrapper returns
+ * ICE_SUCCESS, the caller must check dcbx_agent_status to verify that the
+ * FW reached the requested state, and react accordingly. (0x0A09)
+ */
+enum ice_status
+ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
+		       bool *dcbx_agent_status, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_stop_start_specific_agent *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+	u16 opcode;
+
+	cmd = &desc.params.lldp_agent_ctrl;
+
+	opcode = ice_aqc_opc_lldp_stop_start_specific_agent;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+
+	if (start_dcbx_agent)
+		cmd->command = ICE_AQC_START_STOP_AGENT_START_DCBX;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	*dcbx_agent_status = false;
+
+	if (status == ICE_SUCCESS &&
+	    cmd->command == ICE_AQC_START_STOP_AGENT_START_DCBX)
+		*dcbx_agent_status = true;
+
+	return status;
+}
+
+/**
+ * ice_aq_get_cee_dcb_cfg
+ * @hw: pointer to the hw struct
+ * @buff: response buffer that stores CEE operational configuration
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get CEE DCBX mode operational configuration from firmware (0x0A07)
+ */
+enum ice_status
+ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
+		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cee_dcb_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, (void *)buff, sizeof(*buff), cd);
+}
+
+/**
+ * ice_cee_to_dcb_cfg
+ * @cee_cfg: pointer to CEE configuration struct
+ * @dcbcfg: DCB configuration struct
+ *
+ * Convert CEE configuration from firmware to DCB configuration
+ */
+static void
+ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+		   struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 status, tlv_status = LE32_TO_CPU(cee_cfg->tlv_status);
+	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
+	u16 app_prio = LE16_TO_CPU(cee_cfg->oper_app_prio);
+	u8 i, err, sync, oper, app_index, ice_app_sel_type;
+	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+	u16 ice_app_prot_id_type;
+
+	/* CEE PG data to ETS config */
+	dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
+
+	/* Note that the FW creates the oper_prio_tc nibbles reversed
+	 * from those in the CEE Priority Group sub-TLV.
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS / 2; i++) {
+		dcbcfg->etscfg.prio_table[i * 2] =
+			((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_0_M) >>
+			 ICE_CEE_PGID_PRIO_0_S);
+		dcbcfg->etscfg.prio_table[i * 2 + 1] =
+			((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_1_M) >>
+			 ICE_CEE_PGID_PRIO_1_S);
+	}
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		dcbcfg->etscfg.tcbwtable[i] = cee_cfg->oper_tc_bw[i];
+
+		if (dcbcfg->etscfg.prio_table[i] == ICE_CEE_PGID_STRICT) {
+			/* Map it to next empty TC */
+			dcbcfg->etscfg.prio_table[i] = cee_cfg->oper_num_tc - 1;
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_STRICT;
+		} else {
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_ETS;
+		}
+	}
+
+	/* CEE PFC data to ETS config */
+	dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
+	dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
+
+	app_index = 0;
+	for (i = 0; i < 3; i++) {
+		if (i == 0) {
+			/* FCoE APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_FCOE_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_FCOE_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FCOE_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FCOE_S;
+			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_FCOE;
+		} else if (i == 1) {
+			/* iSCSI APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_ISCSI_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_ISCSI_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_ISCSI_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
+			ice_app_sel_type = ICE_APP_SEL_TCPIP;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+		} else {
+			/* FIP APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_FIP_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FIP_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FIP_S;
+			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_FIP;
+		}
+
+		status = (tlv_status & ice_aqc_cee_status_mask) >>
+			 ice_aqc_cee_status_shift;
+		err = (status & ICE_TLV_STATUS_ERR) ? 1 : 0;
+		sync = (status & ICE_TLV_STATUS_SYNC) ? 1 : 0;
+		oper = (status & ICE_TLV_STATUS_OPER) ? 1 : 0;
+		/* Add FCoE/iSCSI/FIP APP if Error is False and
+		 * Oper/Sync is True
+		 */
+		if (!err && sync && oper) {
+			dcbcfg->app[app_index].priority =
+				(app_prio & ice_aqc_cee_app_mask) >>
+				ice_aqc_cee_app_shift;
+			dcbcfg->app[app_index].selector = ice_app_sel_type;
+			dcbcfg->app[app_index].prot_id = ice_app_prot_id_type;
+			app_index++;
+		}
+	}
+
+	dcbcfg->numapps = app_index;
+}
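+
+/* Example (illustrative): if tlv_status reports the iSCSI APP TLV as
+ * oper and sync with no error bit set, one APP entry is added with the
+ * TCP/IP selector, the iSCSI protocol ID and the priority extracted from
+ * oper_app_prio; otherwise that APP is skipped entirely.
+ */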
+
+/**
+ * ice_get_ieee_or_cee_dcb_cfg
+ * @pi: port information structure
+ * @dcbx_mode: mode of DCBX (IEEE or CEE)
+ *
+ * Get IEEE or CEE mode DCB configuration from the Firmware
+ */
+STATIC enum ice_status
+ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
+{
+	struct ice_dcbx_cfg *dcbx_cfg = NULL;
+	enum ice_status ret;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (dcbx_mode == ICE_DCBX_MODE_IEEE)
+		dcbx_cfg = &pi->local_dcbx_cfg;
+	else if (dcbx_mode == ICE_DCBX_MODE_CEE)
+		dcbx_cfg = &pi->desired_dcbx_cfg;
+
+	/* Get Local DCB Config in case of ICE_DCBX_MODE_IEEE
+	 * or get CEE DCB Desired Config in case of ICE_DCBX_MODE_CEE
+	 */
+	ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_LOCAL,
+				 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
+	if (ret)
+		goto out;
+
+	/* Get Remote DCB Config */
+	dcbx_cfg = &pi->remote_dcbx_cfg;
+	ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+				 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
+	/* Don't treat ENOENT as an error for Remote MIBs */
+	if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT)
+		ret = ICE_SUCCESS;
+
+out:
+	return ret;
+}
+
+/**
+ * ice_get_dcb_cfg
+ * @pi: port information structure
+ *
+ * Get DCB configuration from the Firmware
+ */
+enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_cee_dcb_cfg_resp cee_cfg;
+	struct ice_dcbx_cfg *dcbx_cfg;
+	enum ice_status ret;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
+	if (ret == ICE_SUCCESS) {
+		/* CEE mode */
+		dcbx_cfg = &pi->local_dcbx_cfg;
+		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+		dcbx_cfg->tlv_status = LE32_TO_CPU(cee_cfg.tlv_status);
+		ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
+		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
+	} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
+		/* CEE mode not enabled; try querying IEEE data */
+		dcbx_cfg = &pi->local_dcbx_cfg;
+		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE;
+		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_IEEE);
+	}
+
+	return ret;
+}
+
+/**
+ * ice_init_dcb
+ * @hw: pointer to the hw struct
+ *
+ * Update DCB configuration from the Firmware
+ */
+enum ice_status ice_init_dcb(struct ice_hw *hw)
+{
+	struct ice_port_info *pi;
+	enum ice_status ret = ICE_SUCCESS;
+
+	if (!hw->func_caps.common_cap.dcb)
+		return ret;
+	pi = hw->port_info;
+	pi->is_sw_lldp = true;
+
+	/* Get DCBX status */
+	pi->dcbx_status = ice_get_dcbx_status(hw);
+
+	/* Check the DCBX Status */
+	switch (pi->dcbx_status) {
+	case ICE_DCBX_STATUS_NOT_STARTED:
+		break;
+	case ICE_DCBX_STATUS_DIS:
+		/* DCBx not in usable state, stop init */
+		return ret;
+	case ICE_DCBX_STATUS_DONE:
+	case ICE_DCBX_STATUS_IN_PROGRESS:
+		/* Get current DCBX configuration */
+		ret = ice_get_dcb_cfg(pi);
+		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
+		if (ret)
+			return ret;
+		break;
+	case ICE_DCBX_STATUS_MULTIPLE_PEERS:
+	default:
+		break;
+	}
+
+	/* Configure the LLDP MIB change event */
+	ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+	if (!ret)
+		pi->is_sw_lldp = false;
+
+	return ret;
+}
+
+/**
+ * ice_add_ieee_ets_common_tlv
+ * @buf: Data buffer to be populated with ice_dcb_ets_cfg data
+ * @ets_cfg: Container for ice_dcb_ets_cfg data
+ *
+ * Populate the TLV buffer with ice_dcb_ets_cfg data
+ */
+static void
+ice_add_ieee_ets_common_tlv(u8 *buf, struct ice_dcb_ets_cfg *ets_cfg)
+{
+	u8 priority0, priority1;
+	u8 offset = 0;
+	int i;
+
+	/* Priority Assignment Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS / 2; i++) {
+		priority0 = ets_cfg->prio_table[i * 2] & 0xF;
+		priority1 = ets_cfg->prio_table[i * 2 + 1] & 0xF;
+		buf[offset] = (priority0 << ICE_IEEE_ETS_PRIO_1_S) | priority1;
+		offset++;
+	}
+
+	/* TC Bandwidth Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 *
+	 * TSA Assignment Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		buf[offset] = ets_cfg->tcbwtable[i];
+		buf[ICE_MAX_TRAFFIC_CLASS + offset] = ets_cfg->tsatable[i];
+		offset++;
+	}
+}
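+
+/* Example (illustrative): with prio_table[0] = 3 and prio_table[1] = 1,
+ * the first octet written is (3 << 4) | 1 = 0x31, the exact inverse of
+ * the parsing in ice_parse_ieee_ets_common_tlv().
+ */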
+
+/**
+ * ice_add_ieee_ets_tlv - Prepare ETS TLV in IEEE format
+ * @tlv: Fill the ETS config data in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Prepare IEEE 802.1Qaz ETS CFG TLV
+ */
+static void
+ice_add_ieee_ets_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+	u8 maxtcwilling = 0;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_ETS_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_ETS_CFG);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* First Octet post subtype
+	 * --------------------------
+	 * |will-|CBS  | Re-  | Max |
+	 * |ing  |     |served| TCs |
+	 * --------------------------
+	 * |1bit | 1bit|3 bits|3bits|
+	 */
+	etscfg = &dcbcfg->etscfg;
+	if (etscfg->willing)
+		maxtcwilling = BIT(ICE_IEEE_ETS_WILLING_S);
+	maxtcwilling |= etscfg->maxtcs & ICE_IEEE_ETS_MAXTC_M;
+	buf[0] = maxtcwilling;
+
+	/* Begin adding at Priority Assignment Table (offset 1 in buf) */
+	ice_add_ieee_ets_common_tlv(&buf[1], etscfg);
+}
+
+/**
+ * ice_add_ieee_etsrec_tlv - Prepare ETS Recommended TLV in IEEE format
+ * @tlv: Fill ETS Recommended TLV in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Prepare IEEE 802.1Qaz ETS REC TLV
+ */
+static void
+ice_add_ieee_etsrec_tlv(struct ice_lldp_org_tlv *tlv,
+			struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etsrec;
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_ETS_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_ETS_REC);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	etsrec = &dcbcfg->etsrec;
+
+	/* First Octet is reserved */
+	/* Begin adding at Priority Assignment Table (offset 1 in buf) */
+	ice_add_ieee_ets_common_tlv(&buf[1], etsrec);
+}
+
+/**
+ * ice_add_ieee_pfc_tlv - Prepare PFC TLV in IEEE format
+ * @tlv: Fill PFC TLV in IEEE format
+ * @dcbcfg: Local store which holds the PFC CFG data
+ *
+ * Prepare IEEE 802.1Qaz PFC CFG TLV
+ */
+static void
+ice_add_ieee_pfc_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_PFC_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_PFC_CFG);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* ----------------------------------------
+	 * |will-|MBC  | Re-  | PFC |  PFC Enable  |
+	 * |ing  |     |served| cap |              |
+	 * -----------------------------------------
+	 * |1bit | 1bit|2 bits|4bits| 1 octet      |
+	 */
+	if (dcbcfg->pfc.willing)
+		buf[0] = BIT(ICE_IEEE_PFC_WILLING_S);
+
+	if (dcbcfg->pfc.mbc)
+		buf[0] |= BIT(ICE_IEEE_PFC_MBC_S);
+
+	buf[0] |= dcbcfg->pfc.pfccap & 0xF;
+	buf[1] = dcbcfg->pfc.pfcena;
+}
+
+/**
+ * ice_add_ieee_app_pri_tlv -  Prepare APP TLV in IEEE format
+ * @tlv: Fill APP TLV in IEEE format
+ * @dcbcfg: Local store which holds the APP CFG data
+ *
+ * Prepare IEEE 802.1Qaz APP CFG TLV
+ */
+static void
+ice_add_ieee_app_pri_tlv(struct ice_lldp_org_tlv *tlv,
+			 struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 typelen, len, offset = 0;
+	u8 priority, selector, i = 0;
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+
+	/* No APP TLVs then just return */
+	if (dcbcfg->numapps == 0)
+		return;
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_APP_PRI);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* Move offset to App Priority Table */
+	offset++;
+	/* Application Priority Table (3 octets)
+	 * Octets:|         1          |    2    |    3    |
+	 *        -----------------------------------------
+	 *        |Priority|Rsrvd| Sel |    Protocol ID    |
+	 *        -----------------------------------------
+	 *   Bits:|23    21|20 19|18 16|15                0|
+	 *        -----------------------------------------
+	 */
+	while (i < dcbcfg->numapps) {
+		priority = dcbcfg->app[i].priority & 0x7;
+		selector = dcbcfg->app[i].selector & 0x7;
+		buf[offset] = (priority << ICE_IEEE_APP_PRIO_S) | selector;
+		buf[offset + 1] = (dcbcfg->app[i].prot_id >> 0x8) & 0xFF;
+		buf[offset + 2] = dcbcfg->app[i].prot_id & 0xFF;
+		/* Move to next app */
+		offset += 3;
+		i++;
+		if (i >= ICE_DCBX_MAX_APPS)
+			break;
+	}
+	/* len includes size of ouisubtype + 1 reserved + 3*numapps */
+	len = sizeof(tlv->ouisubtype) + 1 + (i * 3);
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) | (len & 0x1FF));
+	tlv->typelen = HTONS(typelen);
+}
+
+/**
+ * ice_add_dcb_tlv - Add all IEEE TLVs
+ * @tlv: Fill TLV data in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ * @tlvid: Type of IEEE TLV
+ *
+ * Add tlv information
+ */
+static void
+ice_add_dcb_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg,
+		u16 tlvid)
+{
+	switch (tlvid) {
+	case ICE_IEEE_TLV_ID_ETS_CFG:
+		ice_add_ieee_ets_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_ETS_REC:
+		ice_add_ieee_etsrec_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_PFC_CFG:
+		ice_add_ieee_pfc_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_APP_PRI:
+		ice_add_ieee_app_pri_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_dcb_cfg_to_lldp - Convert DCB configuration to MIB format
+ * @lldpmib: pointer to the buffer to hold the constructed LLDP MIB
+ * @miblen: length of the LLDP MIB written to the buffer
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Convert the DCB configuration to MIB format
+ */
+void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 len, offset = 0, tlvid = ICE_TLV_ID_START;
+	struct ice_lldp_org_tlv *tlv;
+	u16 typelen;
+
+	tlv = (struct ice_lldp_org_tlv *)lldpmib;
+	while (1) {
+		ice_add_dcb_tlv(tlv, dcbcfg, tlvid++);
+		typelen = NTOHS(tlv->typelen);
+		len = (typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S;
+		if (len)
+			offset += len + 2;
+		/* END TLV or beyond LLDPDU size */
+		if (tlvid >= ICE_TLV_ID_END_OF_LLDPPDU ||
+		    offset > ICE_LLDPDU_SIZE)
+			break;
+		/* Move to next TLV */
+		if (len)
+			tlv = (struct ice_lldp_org_tlv *)
+				((char *)tlv + sizeof(tlv->typelen) + len);
+	}
+	*miblen = offset;
+}
+
+/**
+ * ice_set_dcb_cfg - Set the local LLDP MIB to FW
+ * @pi: port information structure
+ *
+ * Set DCB configuration to the Firmware
+ */
+enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
+{
+	u8 mib_type, *lldpmib = NULL;
+	struct ice_dcbx_cfg *dcbcfg;
+	enum ice_status ret;
+	struct ice_hw *hw;
+	u16 miblen;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* update the hw local config */
+	dcbcfg = &pi->local_dcbx_cfg;
+	/* Allocate the LLDPDU */
+	lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
+	if (!lldpmib)
+		return ICE_ERR_NO_MEMORY;
+
+	mib_type = SET_LOCAL_MIB_TYPE_LOCAL_MIB;
+	if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
+		mib_type |= SET_LOCAL_MIB_TYPE_CEE_NON_WILLING;
+
+	ice_dcb_cfg_to_lldp(lldpmib, &miblen, dcbcfg);
+	ret = ice_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, miblen,
+				  NULL);
+
+	ice_free(hw, lldpmib);
+
+	return ret;
+}
+
+/**
+ * ice_aq_query_cfg_port_ets - query or configure port ETS configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ * @opcode: query or config port ets
+ *
+ * Query the current or set the port ETS configuration
+ */
+enum ice_status
+ice_aq_query_cfg_port_ets(struct ice_port_info *pi,
+			  struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd, enum ice_adminq_opc opcode)
+{
+	struct ice_aqc_cfg_query_port_ets *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	cmd = &desc.params.port_ets;
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	cmd->port_teid = pi->root->info.node_teid;
+
+	if (opcode == ice_aqc_opc_cfg_port_ets)
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(pi->hw, &desc, buf, buf_size, cd);
+	return status;
+}
+
+/**
+ * ice_update_port_tc_tree_cfg - update the TC tree configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ *
+ * update the SW DB with the new TC changes
+ */
+enum ice_status
+ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
+			    struct ice_aqc_port_ets_elem *buf)
+{
+	struct ice_sched_node *node, *tc_node;
+	struct ice_aqc_get_elem elem;
+	enum ice_status status = ICE_SUCCESS;
+	u32 teid1, teid2;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	/* mark TC nodes missing from the new configuration as unused */
+	for (i = 0; i < pi->root->num_children; i++) {
+		teid1 = LE32_TO_CPU(pi->root->children[i]->info.node_teid);
+		for (j = 0; j < ICE_MAX_TRAFFIC_CLASS; j++) {
+			teid2 = LE32_TO_CPU(buf->tc_node_teid[j]);
+			if (teid1 == teid2)
+				break;
+		}
+		if (j < ICE_MAX_TRAFFIC_CLASS)
+			continue;
+		/* TC is missing */
+		pi->root->children[i]->in_use = false;
+	}
+	/* add the new TC nodes */
+	for (j = 0; j < ICE_MAX_TRAFFIC_CLASS; j++) {
+		teid2 = LE32_TO_CPU(buf->tc_node_teid[j]);
+		if (teid2 == ICE_INVAL_TEID)
+			continue;
+		/* Is it already present in the tree ? */
+		for (i = 0; i < pi->root->num_children; i++) {
+			tc_node = pi->root->children[i];
+			if (!tc_node)
+				continue;
+			teid1 = LE32_TO_CPU(tc_node->info.node_teid);
+			if (teid1 == teid2) {
+				tc_node->tc_num = j;
+				tc_node->in_use = true;
+				break;
+			}
+		}
+		if (i < pi->root->num_children)
+			continue;
+		/* new TC */
+		status = ice_sched_query_elem(pi->hw, teid2, &elem);
+		if (!status)
+			status = ice_sched_add_node(pi, 1, &elem.generic[0]);
+		if (status)
+			break;
+		/* update the TC number */
+		node = ice_sched_find_node_by_teid(pi->root, teid2);
+		if (node)
+			node->tc_num = j;
+	}
+	return status;
+}
+
+/**
+ * ice_query_cfg_port_ets - query or configure port ETS configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ * @config: true - config port ets, false - query port ets
+ *
+ * Query the current or set the port ETS configuration, then update the
+ * SW DB with any TC changes
+ */
+enum ice_status
+ice_query_cfg_port_ets(struct ice_port_info *pi,
+		       struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+		       struct ice_sq_cd *cd, bool config)
+{
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+
+	opcode = config ? ice_aqc_opc_cfg_port_ets : ice_aqc_opc_query_port_ets;
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_aq_query_cfg_port_ets(pi, buf, buf_size, cd, opcode);
+	if (!status)
+		status = ice_update_port_tc_tree_cfg(pi, buf);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
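+
+/* Illustrative usage (not part of this patch): querying (not setting)
+ * the current port ETS configuration, assuming a valid port_info.
+ *
+ *    struct ice_aqc_port_ets_elem buf = { 0 };
+ *
+ *    ret = ice_query_cfg_port_ets(pi, &buf, sizeof(buf), NULL, false);
+ */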
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
new file mode 100644
index 0000000..b0e5a5f
--- /dev/null
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DCB_H_
+#define _ICE_DCB_H_
+
+#include "ice_type.h"
+
+#define ICE_DCBX_OFFLOAD_DIS		0
+#define ICE_DCBX_OFFLOAD_ENABLED	1
+
+#define ICE_DCBX_STATUS_NOT_STARTED	0
+#define ICE_DCBX_STATUS_IN_PROGRESS	1
+#define ICE_DCBX_STATUS_DONE		2
+#define ICE_DCBX_STATUS_MULTIPLE_PEERS	3
+#define ICE_DCBX_STATUS_DIS		7
+
+#define ICE_TLV_TYPE_END		0
+#define ICE_TLV_TYPE_ORG		127
+
+#define ICE_IEEE_8021QAZ_OUI		0x0080C2
+#define ICE_IEEE_SUBTYPE_ETS_CFG	9
+#define ICE_IEEE_SUBTYPE_ETS_REC	10
+#define ICE_IEEE_SUBTYPE_PFC_CFG	11
+#define ICE_IEEE_SUBTYPE_APP_PRI	12
+
+#define ICE_CEE_DCBX_OUI		0x001B21
+#define ICE_CEE_DCBX_TYPE		2
+
+#define ICE_CEE_SUBTYPE_CTRL		1
+#define ICE_CEE_SUBTYPE_PG_CFG		2
+#define ICE_CEE_SUBTYPE_PFC_CFG		3
+#define ICE_CEE_SUBTYPE_APP_PRI		4
+
+#define ICE_CEE_MAX_FEAT_TYPE		3
+#define ICE_LLDP_ADMINSTATUS_DIS	0
+#define ICE_LLDP_ADMINSTATUS_ENA_RX	1
+#define ICE_LLDP_ADMINSTATUS_ENA_TX	2
+#define ICE_LLDP_ADMINSTATUS_ENA_RXTX	3
+
+/* Defines for LLDP TLV header */
+#define ICE_LLDP_TLV_LEN_S		0
+#define ICE_LLDP_TLV_LEN_M		(0x01FF << ICE_LLDP_TLV_LEN_S)
+#define ICE_LLDP_TLV_TYPE_S		9
+#define ICE_LLDP_TLV_TYPE_M		(0x7F << ICE_LLDP_TLV_TYPE_S)
+#define ICE_LLDP_TLV_SUBTYPE_S		0
+#define ICE_LLDP_TLV_SUBTYPE_M		(0xFF << ICE_LLDP_TLV_SUBTYPE_S)
+#define ICE_LLDP_TLV_OUI_S		8
+#define ICE_LLDP_TLV_OUI_M		(0xFFFFFFUL << ICE_LLDP_TLV_OUI_S)
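+
+/* Example (illustrative): a typelen of 0xFE19 decodes to TLV type 127
+ * (organizationally specific) and length 25, i.e. the size of an IEEE
+ * ETS TLV (ICE_IEEE_ETS_TLV_LEN).
+ */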
+
+/* Defines for IEEE ETS TLV */
+#define ICE_IEEE_ETS_MAXTC_S	0
+#define ICE_IEEE_ETS_MAXTC_M		(0x7 << ICE_IEEE_ETS_MAXTC_S)
+#define ICE_IEEE_ETS_CBS_S		6
+#define ICE_IEEE_ETS_CBS_M		BIT(ICE_IEEE_ETS_CBS_S)
+#define ICE_IEEE_ETS_WILLING_S		7
+#define ICE_IEEE_ETS_WILLING_M		BIT(ICE_IEEE_ETS_WILLING_S)
+#define ICE_IEEE_ETS_PRIO_0_S		0
+#define ICE_IEEE_ETS_PRIO_0_M		(0x7 << ICE_IEEE_ETS_PRIO_0_S)
+#define ICE_IEEE_ETS_PRIO_1_S		4
+#define ICE_IEEE_ETS_PRIO_1_M		(0x7 << ICE_IEEE_ETS_PRIO_1_S)
+#define ICE_CEE_PGID_PRIO_0_S		0
+#define ICE_CEE_PGID_PRIO_0_M		(0xF << ICE_CEE_PGID_PRIO_0_S)
+#define ICE_CEE_PGID_PRIO_1_S		4
+#define ICE_CEE_PGID_PRIO_1_M		(0xF << ICE_CEE_PGID_PRIO_1_S)
+#define ICE_CEE_PGID_STRICT		15
+
+/* Defines for IEEE TSA types */
+#define ICE_IEEE_TSA_STRICT		0
+#define ICE_IEEE_TSA_CBS		1
+#define ICE_IEEE_TSA_ETS		2
+#define ICE_IEEE_TSA_VENDOR		255
+
+/* Defines for IEEE PFC TLV */
+#define ICE_IEEE_PFC_CAP_S		0
+#define ICE_IEEE_PFC_CAP_M		(0xF << ICE_IEEE_PFC_CAP_S)
+#define ICE_IEEE_PFC_MBC_S		6
+#define ICE_IEEE_PFC_MBC_M		BIT(ICE_IEEE_PFC_MBC_S)
+#define ICE_IEEE_PFC_WILLING_S		7
+#define ICE_IEEE_PFC_WILLING_M		BIT(ICE_IEEE_PFC_WILLING_S)
+
+/* Defines for IEEE APP TLV */
+#define ICE_IEEE_APP_SEL_S		0
+#define ICE_IEEE_APP_SEL_M		(0x7 << ICE_IEEE_APP_SEL_S)
+#define ICE_IEEE_APP_PRIO_S		5
+#define ICE_IEEE_APP_PRIO_M		(0x7 << ICE_IEEE_APP_PRIO_S)
+
+/* TLV definitions for preparing MIB */
+#define ICE_TLV_ID_CHASSIS_ID		0
+#define ICE_TLV_ID_PORT_ID		1
+#define ICE_TLV_ID_TIME_TO_LIVE		2
+#define ICE_IEEE_TLV_ID_ETS_CFG		3
+#define ICE_IEEE_TLV_ID_ETS_REC		4
+#define ICE_IEEE_TLV_ID_PFC_CFG		5
+#define ICE_IEEE_TLV_ID_APP_PRI		6
+#define ICE_TLV_ID_END_OF_LLDPPDU	7
+#define ICE_TLV_ID_START		ICE_IEEE_TLV_ID_ETS_CFG
+
+#define ICE_IEEE_ETS_TLV_LEN		25
+#define ICE_IEEE_PFC_TLV_LEN		6
+#define ICE_IEEE_APP_TLV_LEN		11
+
+#pragma pack(1)
+/* IEEE 802.1AB LLDP TLV structure */
+struct ice_lldp_generic_tlv {
+	__be16 typelen;
+	u8 tlvinfo[1];
+};
+
+/* IEEE 802.1AB LLDP Organization specific TLV */
+struct ice_lldp_org_tlv {
+	__be16 typelen;
+	__be32 ouisubtype;
+	u8 tlvinfo[1];
+};
+#pragma pack()
+
+struct ice_cee_tlv_hdr {
+	__be16 typelen;
+	u8 operver;
+	u8 maxver;
+};
+
+struct ice_cee_ctrl_tlv {
+	struct ice_cee_tlv_hdr hdr;
+	__be32 seqno;
+	__be32 ackno;
+};
+
+struct ice_cee_feat_tlv {
+	struct ice_cee_tlv_hdr hdr;
+	u8 en_will_err; /* Bits: |En|Will|Err|Reserved(5)| */
+#define ICE_CEE_FEAT_TLV_ENA_M		0x80
+#define ICE_CEE_FEAT_TLV_WILLING_M	0x40
+#define ICE_CEE_FEAT_TLV_ERR_M		0x20
+	u8 subtype;
+	u8 tlvinfo[1];
+};
+
+#pragma pack(1)
+struct ice_cee_app_prio {
+	__be16 protocol;
+	u8 upper_oui_sel; /* Bits: |Upper OUI(6)|Selector(2)| */
+#define ICE_CEE_APP_SELECTOR_M	0x03
+	__be16 lower_oui;
+	u8 prio_map;
+};
+#pragma pack()
+
+/* TODO: The structures below define LLDP/DCBX variables and
+ * statistics; how to obtain the required information from the
+ * firmware to populate them is still to be determined.
+ */
+
+/* IEEE 802.1AB LLDP Agent Statistics */
+struct ice_lldp_stats {
+	u64 remtablelastchangetime;
+	u64 remtableinserts;
+	u64 remtabledeletes;
+	u64 remtabledrops;
+	u64 remtableageouts;
+	u64 txframestotal;
+	u64 rxframesdiscarded;
+	u64 rxportframeerrors;
+	u64 rxportframestotal;
+	u64 rxporttlvsdiscardedtotal;
+	u64 rxporttlvsunrecognizedtotal;
+	u64 remtoomanyneighbors;
+};
+
+/* IEEE 802.1Qaz DCBX variables */
+struct ice_dcbx_variables {
+	u32 defmaxtrafficclasses;
+	u32 defprioritytcmapping;
+	u32 deftcbandwidth;
+	u32 deftsaassignment;
+};
+
+enum ice_status
+ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf,
+		    u16 buf_size, u16 *local_len, u16 *remote_len,
+		    struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
+		    struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
+		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
+		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
+u8 ice_get_dcbx_status(struct ice_hw *hw);
+enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+enum ice_status
+ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
+		   struct ice_dcbx_cfg *dcbcfg);
+enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
+enum ice_status ice_init_dcb(struct ice_hw *hw);
+enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
+void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
+enum ice_status
+ice_query_cfg_port_ets(struct ice_port_info *pi,
+		       struct ice_aqc_port_ets_elem *buff, u16 buf_size,
+		       struct ice_sq_cd *cmd_details, bool config);
+enum ice_status
+ice_aq_query_cfg_port_ets(struct ice_port_info *pi,
+			  struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd, enum ice_adminq_opc opcode);
+enum ice_status
+ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
+			    struct ice_aqc_port_ets_elem *buf);
+#endif /* _ICE_DCB_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 08/34] net/ice: Add basic transmit scheduler
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (6 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 07/34] net/ice: Add data center bridging (DCB) Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 09/34] net/ice: Add virtual switch code Wenzhuo Lu
                     ` (26 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code for the basic TX scheduler.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 5380 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_sched.h |  210 ++
 2 files changed, 5590 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..7acbae6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,5380 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through, stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is the same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
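+	/* the RD flag tells FW that the attached buffer carries command data to read */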
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node to the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from hw
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
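+	/* buf ends in a one-element teid array, hence (num_nodes - 1) extra u32 entries */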
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
+			  hw->adminq.sq_last_status);
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional and won't go
+			 * more than 9 calls deep
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+
+		ice_sched_remove_elems(hw, node->parent, 1, &teid);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue-to-port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x0400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling elements
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+/**
+ * ice_aq_cfg_sched_elems - configures scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_cfgd: returns total number of elements configured
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure scheduling elements (0x0403)
+ */
+static enum ice_status
+ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
+		       struct ice_aqc_conf_elem *buf, u16 buf_size,
+		       u16 *elems_cfgd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_cfg_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_cfgd, cd);
+}
+
+/**
+ * ice_aq_move_sched_elems - move scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to move
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_movd: returns total number of groups moved
+ * @cd: pointer to command details structure or NULL
+ *
+ * Move scheduling elements (0x0408)
+ */
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_movd, cd);
+}
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_aq_rl_profile - performs a rate limiting task
+ * @hw: pointer to the hw struct
+ * @opcode: opcode for add, query, or remove profile(s)
+ * @num_profiles: the number of profiles
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_processed: number of processed add or remove profile(s) to return
+ * @cd: pointer to command details structure
+ *
+ * RL profile function to add, query, or remove profile(s)
+ */
+static enum ice_status
+ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode,
+		  u16 num_profiles, struct ice_aqc_rl_profile_generic_elem *buf,
+		  u16 buf_size, u16 *num_processed, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_rl_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.rl_profile;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	cmd->num_profiles = CPU_TO_LE16(num_profiles);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_processed)
+		*num_processed = LE16_TO_CPU(cmd->num_processed);
+	return status;
+}
+
+/**
+ * ice_aq_add_rl_profile - adds rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_added: returns the total number of profiles added
+ * @cd: pointer to command details structure
+ *
+ * Add RL profile (0x0410)
+ */
+static enum ice_status
+ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
+		      struct ice_aqc_rl_profile_generic_elem *buf,
+		      u16 buf_size, u16 *num_profiles_added,
+		      struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_add_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_added, cd);
+}
+
+/**
+ * ice_aq_query_rl_profile - query rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure
+ *
+ * Query RL profile (0x0411)
+ */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
+				 num_profiles, buf, buf_size, NULL, cd);
+}
+
+/**
+ * ice_aq_remove_rl_profile - removes RL profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to remove
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_removed: returns the total number of profiles removed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Remove RL profile (0x0415)
+ */
+static enum ice_status
+ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			 struct ice_aqc_rl_profile_generic_elem *buf,
+			 u16 buf_size, u16 *num_profiles_removed,
+			 struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_remove_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_removed, cd);
+}
+
+/**
+ * ice_sched_clear_rl_prof - clears RL profile entries
+ * @pi: port information structure
+ *
+ * This function removes all RL profiles from hw as well as from the SW DB.
+ */
+static void ice_sched_clear_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			struct ice_hw *hw = pi->hw;
+			enum ice_status status;
+
+			rl_prof_elem->prof_id_ref = 0;
+			status = ice_sched_del_rl_profile(hw, rl_prof_elem);
+			if (status) {
+				ice_debug(hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+				/* on error, still unlink and free the memory */
+				LIST_DEL(&rl_prof_elem->list_entry);
+				ice_free(hw, rl_prof_elem);
+			}
+		}
+	}
+}
+
+/**
+ * ice_sched_clear_agg - clears the aggregator-related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes the aggregator list and frees up the
+ * aggregator-related memory previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	/* remove rl profiles related lists */
+	ice_sched_clear_rl_prof(pi);
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+/**
+ * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
+ * @hw: pointer to the hw struct
+ * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure L2 Node CGD (0x0414)
+ */
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf,
+		       u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_cfg_l2_node_cgd *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.cfg_l2_node_cgd;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to hw as well as to the SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
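+	/* buf already contains one generic element, hence (num_nodes - 1) extras */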
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add node failed FW Error %d\n",
+			  hw->adminq.sq_last_status);
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* do current children + required nodes exceed max children? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* use all the remaining child slots if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional and won't go
+			 * more than 2 calls deep
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't overwrite the first node TEID if the first node was
+		 * already added by the call above. Instead, pass temporary
+		 * storage to all subsequent recursive calls.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional: with 1024 queues per VSI
+		 * it runs at most 16 iterations.
+		 * 1024 / 8 = 128 layer-8 nodes
+		 * 128 / 8 = 16 (adding 8 nodes per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* The queue group layer is always the second-to-last layer: the last
+	 * layer holds the leaf nodes, and layer indices are 0-based, hence
+	 * the offset of 2.
+	 */
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the VSI layer based on the number of layers */
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
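+/* For example, per the table above: with 9 total layers the VSI layer is
+ * 9 - ICE_VSI_LAYER_OFFSET = 6 (the table implies an offset of 3); with 5 or
+ * fewer layers the computed value would not be above the SW entry point, so
+ * sw_entry_point_layer is returned instead.
+ */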
+
+/**
+ * ice_sched_get_agg_layer - get the current aggregator layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current aggregator layer number
+ */
+static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
+{
+	/* Num Layers       agg layer
+	 *     9               4
+	 *     7 or less       sw_entry_point_layer
+	 */
+	/* calculate the aggregator layer based on the number of layers */
+	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
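+/* Likewise, per the table above: with 9 total layers the aggregator layer is
+ * 9 - ICE_AGG_LAYER_OFFSET = 4 (implying an offset of 5); with 7 or fewer
+ * layers the SW entry point layer is used.
+ */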
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port information structure
+ *
+ * This function is the initial call to find the total number of Tx scheduler
+ * resources, default topology created by firmware and storing the information
+ * in SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1-8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1-9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+	for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+		INIT_LIST_HEAD(&pi->rl_prof_list[i]);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_node - Get the struct ice_sched_node for given teid
+ * @pi: port information structure
+ * @teid: Scheduler node TEID
+ *
+ * This function retrieves the ice_sched_node struct for given teid from
+ * the SW DB and returns it to the caller.
+ */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid)
+{
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return NULL;
+
+	/* Find the node starting from root */
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_find_node_by_teid(pi->root, teid);
+	ice_release_lock(&pi->sched_lock);
+
+	if (!node)
+		ice_debug(pi->hw, ICE_DBG_SCHED,
+			  "Node not found for teid=0x%x\n", teid);
+
+	return node;
+}
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* The max sibling group size of a given layer equals the max
+	 * children of a node in the layer above:
+	 * layer 1 node max children = layer 2 max sibling group size,
+	 * layer 2 node max children = layer 3 max sibling group size,
+	 * and so on. This array is populated from the root (index 0) to
+	 * the qgroup layer (index 7). Leaf nodes have no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
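+	/* e.g. max_children[0], the root's limit, comes from
+	 * layer_props[1].max_sibl_grp_sz since the root's children
+	 * are the layer 1 nodes.
+	 */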
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional and won't go
+		 * more than 8 calls deep
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* bail out if no VSI node exists for this TC */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_get_agg_node - Get an aggregator node based on agg id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @agg_id: aggregator id
+ *
+ * This function retrieves an aggregator node for a given agg id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id)
+{
+	struct ice_sched_node *node;
+	u8 agg_layer;
+
+	agg_layer = ice_sched_get_agg_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->agg_id == agg_id)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_check_node - Compare node parameters between SW DB and HW DB
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function queries and compares the HW element with SW DB node parameters
+ */
+static bool ice_sched_check_node(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	struct ice_aqc_get_elem buf;
+	enum ice_status status;
+	u32 node_teid;
+
+	node_teid = LE32_TO_CPU(node->info.node_teid);
+	status = ice_sched_query_elem(hw, node_teid, &buf);
+	if (status != ICE_SUCCESS)
+		return false;
+
+	if (memcmp(buf.generic, &node->info, sizeof(*buf.generic))) {
+		ice_debug(hw, ICE_DBG_SCHED, "Node mismatch for teid=0x%x\n",
+			  node_teid);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
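+/* Worked example (a sketch): with 1024 queues and 8 children per node at
+ * every layer, the qgroup layer needs 1024 / 8 = 128 nodes and the layer
+ * above needs DIVIDE_AND_ROUND_UP(128, 8) = 16, and so on up to the VSI
+ * layer.
+ */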
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to the tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of support nodes needed to add this
+ * VSI into the Tx tree, including the VSI, its parent and the intermediate
+ * nodes in the layers below
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add an intermediate node if the TC has no children; at
+		 * least one node is always needed at the VSI layer
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached their max
+			 * children, add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* The tree has an intermediate node with room for
+			 * this new VSI, so there is no need to calculate
+			 * support nodes for the layers below.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI support nodes into the Tx tree, including the
+ * VSI, its parent and the intermediate nodes in the layers below
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add vsi supported nodes to tc subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* the number of queues is unchanged or lower than before */
+	if (new_numqs <= prev_numqs)
+		return status;
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+	/* Always keep the maximum queue configuration. Update the tree only
+	 * if the number of queues is greater than the previous number. This
+	 * may leave some extra nodes in the tree if the number of queues
+	 * shrinks, but that wouldn't harm anything. Removing those extra
+	 * nodes may complicate the code if they are part of an SRL or are
+	 * individually rate limited.
+	 */
+	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+					       new_num_nodes, owner);
+	if (status)
+		return status;
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and VSI is in suspended state then resume the VSI back. If TC is
+ * disabled then suspend the VSI if it is not already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "add/config VSI %d\n", vsi_handle);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if the TC is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queue count whenever the VSI is added
+		 * to the scheduler tree for the first time (at boot or after
+		 * a reset), since the child nodes must be recreated in these
+		 * cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove aggregator-related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
+ * @node: pointer to the sub-tree node
+ *
+ * This function checks for a leaf node presence in a given sub-tree node.
+ */
+static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < node->num_children; i++)
+		if (ice_sched_is_leaf_node_present(node->children[i]))
+			return true;
+	/* check for a leaf node */
+	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+}
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma children nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+		u8 j = 0;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (ice_sched_is_leaf_node_present(vsi_node)) {
+			ice_debug(pi->hw, ICE_DBG_SCHED,
+				  "VSI has leaf nodes in TC %d\n", i);
+			status = ICE_ERR_IN_USE;
+			goto exit_sched_rm_vsi_cfg;
+		}
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
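
The teardown counterpart of the setup sketch earlier (illustrative; same
assumed 'pi' and 'vsi_handle'):

	/* All LAN queue (leaf) nodes must have been freed first, otherwise
	 * the call fails with ICE_ERR_IN_USE.
	 */
	status = ice_rm_vsi_lan_cfg(pi, vsi_handle);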
+
+/**
+ * ice_sched_is_tree_balanced - check whether the tree nodes are identical
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function compares all the nodes of a given tree against the HW DB
+ * nodes. It needs to be called with the port_info->sched_lock held.
+ */
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	u8 i;
+
+	/* start from the leaf node */
+	for (i = 0; i < node->num_children; i++)
+		/* Fail if the node doesn't match the SW DB. This recursion
+		 * is intentional and won't go deeper than 9 calls (the
+		 * maximum number of scheduler layers).
+		 */
+		if (!ice_sched_is_tree_balanced(hw, node->children[i]))
+			return false;
+
+	return ice_sched_check_node(hw, node);
+}
+
+/**
+ * ice_aq_query_node_to_root - retrieve the tree topology for a given node teid
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function retrieves the tree topology from the firmware for a given
+ * node teid to the root node.
+ */
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_query_node_to_root *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.query_node_to_root;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
+	cmd->teid = CPU_TO_LE32(node_teid);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_agg_info - get the aggregator info
+ * @hw: pointer to the hardware structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the agg id. It returns the info if the agg id is
+ * present in the list, otherwise it returns NULL.
+ */
+static struct ice_sched_agg_info*
+ice_get_agg_info(struct ice_hw *hw, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id)
+			return agg_info;
+
+	return NULL;
+}
+
+/**
+ * ice_move_all_vsi_to_dflt_agg - move all VSI(s) to default agg
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: traffic class number
+ * @rm_vsi_info: true to remove the agg VSI info entries as well
+ *
+ * This function moves all the VSI(s) to the default aggregator and deletes
+ * the agg VSI info based on the passed-in boolean parameter rm_vsi_info.
+ * The caller holds the scheduler lock.
+ */
+static enum ice_status
+ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi,
+			     struct ice_sched_agg_info *agg_info, u8 tc,
+			     bool rm_vsi_info)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_vsi_info *tmp;
+	enum ice_status status = ICE_SUCCESS;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, tmp, &agg_info->agg_vsi_list,
+				 ice_sched_agg_vsi_info, list_entry) {
+		u16 vsi_handle = agg_vsi_info->vsi_handle;
+
+		/* Move VSI to default agg */
+		if (!ice_is_tc_ena(agg_vsi_info->tc_bitmap[0], tc))
+			continue;
+
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle,
+						   ICE_DFLT_AGG_ID, tc);
+		if (status)
+			break;
+
+		ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+		if (rm_vsi_info && !agg_vsi_info->tc_bitmap[0]) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(pi->hw, agg_vsi_info);
+		}
+	}
+
+	return status;
+}
+
+/**
+ * ice_rm_agg_cfg_tc - remove agg configuration for tc
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: tc number
+ * @rm_vsi_info: true to remove the agg VSI info entries as well
+ *
+ * This function removes the agg reference to the VSIs of the given tc. It
+ * removes the agg configuration completely for the requested tc. The caller
+ * needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
+		  u8 tc, bool rm_vsi_info)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	/* If nothing to remove - return success */
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		goto exit_rm_agg_cfg_tc;
+
+	status = ice_move_all_vsi_to_dflt_agg(pi, agg_info, tc, rm_vsi_info);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	/* Delete aggregator node(s) */
+	status = ice_sched_rm_agg_cfg(pi, agg_info->agg_id, tc);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	ice_clear_bit(tc, agg_info->tc_bitmap);
+exit_rm_agg_cfg_tc:
+	return status;
+}
+
+/**
+ * ice_save_agg_tc_bitmap - save agg TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * Save agg TC bitmap. This function needs to be called with scheduler
+ * lock held.
+ */
+static enum ice_status
+ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
+		       ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_cfg_agg - configure agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type (queue, VSI, or agg group)
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * It registers a unique aggregator node into scheduler services. It
+ * allows a user to register with a unique ID to track its resources.
+ * The aggregator type determines if this is a queue group, VSI group
+ * or aggregator group. It then creates the agg node(s) for requested
+ * tc(s) or removes an existing agg node including its configuration
+ * if indicated via tc_bitmap. Call ice_rm_agg_cfg to release agg
+ * resources and remove agg id.
+ * This function needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+		  enum ice_agg_type agg_type, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info) {
+		/* Create a new entry for the new agg id */
+		agg_info = (struct ice_sched_agg_info *)
+			ice_malloc(hw, sizeof(*agg_info));
+		if (!agg_info) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit_reg_agg;
+		}
+		agg_info->agg_id = agg_id;
+		agg_info->agg_type = agg_type;
+		agg_info->tc_bitmap[0] = 0;
+
+		/* Initialize the aggregator vsi list head */
+		INIT_LIST_HEAD(&agg_info->agg_vsi_list);
+
+		/* Add new entry in agg list */
+		LIST_ADD(&agg_info->list_entry, &hw->agg_list);
+	}
+	/* Create agg node(s) for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc)) {
+			/* Delete agg cfg tc if it exists previously */
+			status = ice_rm_agg_cfg_tc(pi, agg_info, tc, false);
+			if (status)
+				break;
+			continue;
+		}
+
+		/* Check if agg node for tc already exists */
+		if (ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+			continue;
+
+		/* Create new agg node for tc */
+		status = ice_sched_add_agg_cfg(pi, agg_id, tc);
+		if (status)
+			break;
+
+		/* Save agg node's tc information */
+		ice_set_bit(tc, agg_info->tc_bitmap);
+	}
+exit_reg_agg:
+	return status;
+}
+
+/**
+ * ice_cfg_agg - config agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type (queue, VSI, or agg group)
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function configures aggregator node(s).
+ */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
+	    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_cfg_agg(pi, agg_id, agg_type,
+				   (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_tc_bitmap(pi, agg_id,
+						(ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
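
A minimal usage sketch for ice_cfg_agg() (illustrative; the agg id is an
arbitrary caller-chosen value):

	/* Register aggregator 1 as an agg group node on TC 0 only (bit 0
	 * of the TC bitmap). Calling again with a bit cleared removes that
	 * TC's aggregator configuration.
	 */
	status = ice_cfg_agg(pi, 1 /* agg_id */, ICE_AGG_TYPE_AGG,
			     0x1 /* tc_bitmap */);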
+
+/**
+ * ice_get_agg_vsi_info - get the aggregator VSI info
+ * @agg_info: aggregator info
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns agg VSI info based on VSI handle. This function needs
+ * to be called with scheduler lock held.
+ */
+static struct ice_sched_agg_vsi_info*
+ice_get_agg_vsi_info(struct ice_sched_agg_info *agg_info, u16 vsi_handle)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+	LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+			    ice_sched_agg_vsi_info, list_entry)
+		if (agg_vsi_info->vsi_handle == vsi_handle)
+			return agg_vsi_info;
+
+	return NULL;
+}
+
+/**
+ * ice_get_vsi_agg_info - get the agg info of VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns the agg info of the VSI represented via vsi_handle.
+ * In this case the VSI has an aggregator other than the default one. This
+ * function needs to be called with the scheduler lock held.
+ */
+static struct ice_sched_agg_info*
+ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+		if (agg_vsi_info)
+			return agg_info;
+	}
+	return NULL;
+}
+
+/**
+ * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * Save the VSI to aggregator TC bitmap. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+			   ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_assoc_vsi_to_agg - associate or move VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * This function moves VSI to a new or default aggregator node. If VSI is
+ * already associated to the agg node then no operation is performed on the
+ * tree. This function needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
+			   u16 vsi_handle, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info) {
+		/* Create new entry for vsi under agg list */
+		agg_vsi_info = (struct ice_sched_agg_vsi_info *)
+			ice_malloc(hw, sizeof(*agg_vsi_info));
+		if (!agg_vsi_info)
+			return ICE_ERR_PARAM;
+
+		/* add vsi id into the agg list */
+		agg_vsi_info->vsi_handle = vsi_handle;
+		LIST_ADD(&agg_vsi_info->list_entry, &agg_info->agg_vsi_list);
+	}
+	/* Move vsi node to new agg node for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+
+		/* Move VSI to new agg */
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
+		if (status)
+			break;
+
+		if (agg_id != ICE_DFLT_AGG_ID)
+			ice_set_bit(tc, agg_vsi_info->tc_bitmap);
+		else
+			ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+	}
+	/* If the VSI moved back to the default agg, delete agg_vsi_info. */
+	if (!ice_is_any_bit_set(agg_vsi_info->tc_bitmap,
+				ICE_MAX_TRAFFIC_CLASS)) {
+		LIST_DEL(&agg_vsi_info->list_entry);
+		ice_free(hw, agg_vsi_info);
+	}
+	return status;
+}
+
+/**
+ * ice_move_vsi_to_agg - moves VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: tc bitmap of enabled tc(s)
+ *
+ * Move or associate VSI to a new or default aggregator node.
+ */
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
+					    (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
+						    (ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
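
A sketch of moving a VSI into and out of the aggregator registered above
(illustrative; same assumed 'pi', 'vsi_handle', and agg id):

	/* Attach the VSI to aggregator 1 on TC 0 */
	status = ice_move_vsi_to_agg(pi, 1 /* agg_id */, vsi_handle, 0x1);
	if (status)
		return status;

	/* Later: move it back to the default aggregator. The agg_vsi_info
	 * entry is freed once no TC bit remains set.
	 */
	status = ice_move_vsi_to_agg(pi, ICE_DFLT_AGG_ID, vsi_handle, 0x1);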
+
+/**
+ * ice_rm_agg_cfg - remove agg configuration
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the agg reference to the VSIs and deletes the agg
+ * id info. It removes the agg configuration completely.
+ */
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
+		if (status)
+			goto exit_ice_rm_agg_cfg;
+	}
+
+	if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
+		status = ICE_ERR_IN_USE;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	/* Safe to delete entry now */
+	LIST_DEL(&agg_info->list_entry);
+	ice_free(pi->hw, agg_info);
+
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+
+exit_ice_rm_agg_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_set_clear_cir_bw_alloc - set or clear CIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear CIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->cir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->cir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_set_clear_eir_bw_alloc - set or clear EIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear EIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->eir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->eir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_bw_alloc - save VSI node's bw alloc information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_cir_bw - set or clear CIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear CIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = 0;
+	} else {
+		/* Save type of bw information */
+		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_eir_bw - set or clear EIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear EIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved shared bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+		/* save EIR bw information */
+		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_shared_bw - set or clear shared bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear shared bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved EIR bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+		/* save shared bw information */
+		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = bw;
+	}
+}
+
+/**
+ * ice_sched_save_vsi_bw - save VSI node's bw information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_prio - set or clear priority information
+ * @bw_t_info: bandwidth type information structure
+ * @prio: priority to save
+ *
+ * Save or clear priority (prio) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
+{
+	bw_t_info->generic = prio;
+	if (bw_t_info->generic)
+		ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_prio - save VSI node's priority information
+ * @pi: port information structure
+ * @vsi_handle: Software VSI handle
+ * @tc: traffic class
+ * @prio: priority to save
+ *
+ * Save priority information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			u8 prio)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw_alloc - save agg node's bw alloc information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: bandwidth alloc information
+ *
+ * Save bw alloc information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw - save agg node's bw information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_vsi_bw_lmt_per_tc - configure VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
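
A usage sketch for the per-TC limit (illustrative; the bandwidth value is
arbitrary):

	/* Cap the VSI's TC 0 node at 100000 kbps (100 Mbps) via the max
	 * (EIR) rate limiter; ice_cfg_vsi_bw_dflt_lmt_per_tc() restores
	 * the default limit afterwards.
	 */
	status = ice_cfg_vsi_bw_lmt_per_tc(pi, vsi_handle, 0 /* tc */,
					   ICE_MAX_BW, 100000 /* kbps */);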
+
+/**
+ * ice_cfg_vsi_bw_dflt_lmt_per_tc - configure default VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function configures default bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_lmt_per_tc - configure aggregator bw limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function applies bw limit to aggregator scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator bw default limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function applies default bw limit to aggregator scheduling node based
+ * on tc information.
+ */
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_shared_lmt - configure VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, bw);
+}
+
+/**
+ * ice_cfg_vsi_bw_no_shared_lmt - configure VSI bw for no shared limiter
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes the shared rate limiter (SRL) of all VSI type nodes
+ * across all traffic classes for VSI matching handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
+					       ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_agg_bw_shared_lmt - configure aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * across all traffic classes for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, bw);
+}
+
+/**
+ * ice_cfg_agg_bw_no_shared_lmt - configure aggregator bw for no shared limiter
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the shared rate limiter (SRL) of all agg type nodes
+ * across all traffic classes for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW);
+}
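
A sketch of the shared-limit helpers above (illustrative; the bandwidth
value is arbitrary):

	/* Apply a 500000 kbps shared (SRL) cap across all TCs of the VSI */
	status = ice_cfg_vsi_bw_shared_lmt(pi, vsi_handle, 500000);
	if (status)
		return status;

	/* Remove it again. The agg variants work the same way, keyed by
	 * agg_id instead of the VSI handle.
	 */
	status = ice_cfg_vsi_bw_no_shared_lmt(pi, vsi_handle);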
+
+/**
+ * ice_cfg_vsi_q_priority - configure VSI queue priority of nodes
+ * @pi: port information structure
+ * @num_qs: number of VSI queues
+ * @q_ids: queue ids array
+ * @q_prio: queue priority array
+ *
+ * This function configures the queue node priority (Sibling Priority) of the
+ * passed in VSI's queue(s).
+ */
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_qs; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
+		if (!node || node->info.data.elem_type !=
+		    ICE_AQC_ELEM_TYPE_LEAF) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		/* Configure Priority */
+		status = ice_sched_cfg_sibl_node_prio(hw, node, q_prio[i]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_agg_vsi_priority_per_tc - config agg's VSI priority per tc
+ * @pi: port information structure
+ * @agg_id: Aggregator id
+ * @num_vsis: number of VSI(s)
+ * @vsi_handle_arr: array of software VSI handles
+ * @node_prio: pointer to node priority
+ * @tc: traffic class
+ *
+ * This function configures the node priority (Sibling Priority) of the
+ * passed in VSI's for a given traffic class (tc) of an Aggregator id.
+ */
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		goto exit_agg_priority_per_tc;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_agg_priority_per_tc;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		goto exit_agg_priority_per_tc;
+
+	if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
+		goto exit_agg_priority_per_tc;
+
+	for (i = 0; i < num_vsis; i++) {
+		struct ice_sched_node *vsi_node;
+		bool vsi_handle_valid = false;
+		u16 vsi_handle;
+
+		status = ICE_ERR_PARAM;
+		vsi_handle = vsi_handle_arr[i];
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			goto exit_agg_priority_per_tc;
+		/* Verify child nodes before applying settings */
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				vsi_handle_valid = true;
+				break;
+			}
+		if (!vsi_handle_valid)
+			goto exit_agg_priority_per_tc;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			goto exit_agg_priority_per_tc;
+
+		if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
+			/* Configure Priority */
+			status = ice_sched_cfg_sibl_node_prio(hw, vsi_node,
+							      node_prio[i]);
+			if (status)
+				break;
+			status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
+							 node_prio[i]);
+			if (status)
+				break;
+		}
+	}
+
+exit_agg_priority_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_alloc - config VSI bw alloc per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @ena_tcmap: enabled tc map
+ * @rl_type: Rate limit type CIR/EIR
+ * @bw_alloc: Array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in VSI's
+ * node(s) for the enabled traffic class(es).
+ */
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
+						     rl_type, bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
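
A usage sketch for the relative bw allocation (illustrative values):

	/* Split the CIR (min bw) credit 50/50 between TC 0 and TC 1;
	 * entries for TCs outside the enabled map are ignored.
	 */
	u8 bw_alloc[ICE_MAX_TRAFFIC_CLASS] = { 50, 50 };
	enum ice_status status;

	status = ice_cfg_vsi_bw_alloc(pi, vsi_handle, 0x3 /* TC 0 and 1 */,
				      ICE_MIN_BW, bw_alloc);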
+
+/**
+ * ice_cfg_agg_bw_alloc - config agg bw alloc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @ena_tcmap: enabled tc map
+ * @rl_type: rate limit type CIR/EIR
+ * @bw_alloc: array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in aggregator
+ * for the enabled traffic class(es).
+ */
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_cfg_agg_bw_alloc;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+exit_cfg_agg_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_calc_wakeup - calculate rl profile wakeup parameter
+ * @bw: bandwidth in kbps
+ *
+ * This function calculates the wakeup parameter of rl profile.
+ */
+static u16 ice_sched_calc_wakeup(s32 bw)
+{
+	s64 bytes_per_sec, wakeup_int, wakeup_a, wakeup_b, wakeup_f;
+	s32 wakeup_f_int;
+	u16 wakeup = 0;
+
+	/* Get the wakeup integer value */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+	wakeup_int = DIV_64BIT(ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+	if (wakeup_int > 63) {
+		wakeup = (u16)((1 << 15) | wakeup_int);
+	} else {
+		/* Calculate fraction value up to 4 decimals
+		 * Convert Integer value to a constant multiplier
+		 */
+		wakeup_b = (s64)ICE_RL_PROF_MULTIPLIER * wakeup_int;
+		wakeup_a = DIV_64BIT((s64)ICE_RL_PROF_MULTIPLIER *
+				     ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+
+		/* Get Fraction value */
+		wakeup_f = wakeup_a - wakeup_b;
+
+		/* Round up the Fractional value via Ceil(Fractional value) */
+		if (wakeup_f > DIV_64BIT(ICE_RL_PROF_MULTIPLIER, 2))
+			wakeup_f += 1;
+
+		wakeup_f_int = (s32)DIV_64BIT(wakeup_f * ICE_RL_PROF_FRACTION,
+					      ICE_RL_PROF_MULTIPLIER);
+		wakeup |= (u16)(wakeup_int << 9);
+		wakeup |= (u16)(0x1ff & wakeup_f_int);
+	}
+
+	return wakeup;
+}
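
For reference, the encoding built above can be unpacked as follows (derived
from the logic in this function; the helper itself is hypothetical and for
illustration only):

	/* Bit 15 set: bits 14:0 carry the integer wakeup value directly
	 * (the wakeup_int > 63 case). Bit 15 clear: bits 14:9 carry the
	 * integer part and bits 8:0 the fraction in units of
	 * 1/ICE_RL_PROF_FRACTION.
	 */
	static void ice_unpack_wakeup(u16 wakeup, u16 *i_part, u16 *f_part)
	{
		if (wakeup & (1 << 15)) {
			*i_part = wakeup & 0x7fff;
			*f_part = 0;
		} else {
			*i_part = (wakeup >> 9) & 0x3f;
			*f_part = wakeup & 0x1ff;
		}
	}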
+
+/**
+ * ice_sched_bw_to_rl_profile - convert bw to profile parameters
+ * @bw: bandwidth in kbps
+ * @profile: profile parameters to return
+ *
+ * This function converts the bw to profile structure format.
+ */
+static enum ice_status
+ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	s64 bytes_per_sec, ts_rate, mv_tmp;
+	bool found = false;
+	s32 encode = 0;
+	s64 mv = 0;
+	s32 i;
+
+	/* Bw settings range is from 0.5Mb/sec to 100Gb/sec */
+	if (bw < ICE_SCHED_MIN_BW || bw > ICE_SCHED_MAX_BW)
+		return status;
+
+	/* Bytes per second from kbps */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+
+	/* encode is 6 bits, but only 5 bits are really useful */
+	for (i = 0; i < 64; i++) {
+		u64 pow_result = BIT_ULL(i);
+
+		ts_rate = DIV_64BIT((s64)ICE_RL_PROF_FREQUENCY,
+				    pow_result * ICE_RL_PROF_TS_MULTIPLIER);
+		if (ts_rate <= 0)
+			continue;
+
+		/* Multiplier value */
+		mv_tmp = DIV_64BIT(bytes_per_sec * ICE_RL_PROF_MULTIPLIER,
+				   ts_rate);
+
+		/* Round to the nearest ICE_RL_PROF_MULTIPLIER */
+		mv = round_up_64bit(mv_tmp, ICE_RL_PROF_MULTIPLIER);
+
+		/* First multiplier value greater than the given
+		 * accuracy bytes
+		 */
+		if (mv > ICE_RL_PROF_ACCURACY_BYTES) {
+			encode = i;
+			found = true;
+			break;
+		}
+	}
+	if (found) {
+		u16 wm;
+
+		wm = ice_sched_calc_wakeup(bw);
+		profile->rl_multiply = CPU_TO_LE16(mv);
+		profile->wake_up_calc = CPU_TO_LE16(wm);
+		profile->rl_encode = CPU_TO_LE16(encode);
+		status = ICE_SUCCESS;
+	} else {
+		status = ICE_ERR_DOES_NOT_EXIST;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_add_rl_profile - add rl profile
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: specifies in which layer to create profile
+ *
+ * This function first checks the existing list for corresponding bw
+ * parameter. If it exists, it returns the associated profile otherwise
+ * it creates a new rate limit profile for requested bw, and adds it to
+ * the hw db and local list. It returns the new profile or null on error.
+ * The caller needs to hold the scheduler lock.
+ */
+static struct ice_aqc_rl_profile_info *
+ice_sched_add_rl_profile(struct ice_port_info *pi,
+			 enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	u16 profiles_added = 0, num_profiles = 1;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw;
+	u8 profile_type;
+
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		break;
+	default:
+		return NULL;
+	}
+
+	if (!pi)
+		return NULL;
+	hw = pi->hw;
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    rl_prof_elem->bw == bw)
+			/* Return existing profile id info */
+			return rl_prof_elem;
+
+	/* Create new profile id */
+	rl_prof_elem = (struct ice_aqc_rl_profile_info *)
+		ice_malloc(hw, sizeof(*rl_prof_elem));
+
+	if (!rl_prof_elem)
+		return NULL;
+
+	status = ice_sched_bw_to_rl_profile(bw, &rl_prof_elem->profile);
+	if (status != ICE_SUCCESS)
+		goto exit_add_rl_prof;
+
+	rl_prof_elem->bw = bw;
+	/* layer_num is zero relative, and fw expects level from 1 to 9 */
+	rl_prof_elem->profile.level = layer_num + 1;
+	rl_prof_elem->profile.flags = profile_type;
+	rl_prof_elem->profile.max_burst_size = CPU_TO_LE16(hw->max_burst_size);
+
+	/* Create new entry in hw db */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_prof_elem->profile;
+	status = ice_aq_add_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+				       &profiles_added, NULL);
+	if (status || profiles_added != num_profiles)
+		goto exit_add_rl_prof;
+
+	/* Good entry - add in the list */
+	rl_prof_elem->prof_id_ref = 0;
+	LIST_ADD(&rl_prof_elem->list_entry, &pi->rl_prof_list[layer_num]);
+	return rl_prof_elem;
+
+exit_add_rl_prof:
+	ice_free(hw, rl_prof_elem);
+	return NULL;
+}
+
+/**
+ * ice_sched_del_rl_profile - remove rl profile
+ * @hw: pointer to the hw struct
+ * @rl_info: rate limit profile information
+ *
+ * If the profile id is not referenced anymore, it removes the profile id
+ * with its associated parameters from the HW DB and locally. The caller
+ * needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	u16 num_profiles_removed;
+	enum ice_status status;
+	u16 num_profiles = 1;
+
+	if (rl_info->prof_id_ref != 0)
+		return ICE_ERR_IN_USE;
+
+	/* Safe to remove profile id */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_info->profile;
+	status = ice_aq_remove_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+					  &num_profiles_removed, NULL);
+	if (status || num_profiles_removed != num_profiles)
+		return ICE_ERR_CFG;
+
+	/* Delete stale entry now */
+	LIST_DEL(&rl_info->list_entry);
+	ice_free(hw, rl_info);
+	return status;
+}
+
+/**
+ * ice_sched_rm_unused_rl_prof - remove unused rl profile
+ * @pi: port information structure
+ *
+ * This function removes unused rate limit profiles from the HW and
+ * SW DB. The caller needs to hold the scheduler lock.
+ */
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			if (!ice_sched_del_rl_profile(pi->hw, rl_prof_elem))
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Removed rl profile\n");
+		}
+	}
+}
+
+/**
+ * ice_sched_update_elem - update element
+ * @hw: pointer to the hw struct
+ * @node: pointer to node
+ * @info: node info to update
+ *
+ * It updates the HW DB and the local SW DB of the node. It updates the
+ * scheduling parameters of the node from the argument info data buffer
+ * (info->data buf) and returns success or an error on a config sched
+ * element failure. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
+		      struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_aqc_conf_elem buf;
+	enum ice_status status;
+	u16 elem_cfgd = 0;
+	u16 num_elems = 1;
+
+	buf.generic[0] = *info;
+	/* Parent teid is a reserved field in this aq call */
+	buf.generic[0].parent_teid = 0;
+	/* Element type is a reserved field in this aq call */
+	buf.generic[0].data.elem_type = 0;
+	/* Flags is a reserved field in this aq call */
+	buf.generic[0].data.flags = 0;
+
+	/* Update HW DB */
+	/* Configure element node */
+	status = ice_aq_cfg_sched_elems(hw, num_elems, &buf, sizeof(buf),
+					&elem_cfgd, NULL);
+	if (status || elem_cfgd != num_elems) {
+		ice_debug(hw, ICE_DBG_SCHED, "Config sched elem error\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* Config success case */
+	/* Now update local SW DB */
+	/* Only copy the data portion of info buffer */
+	node->info.data = info->data;
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_lmt - configure node sched params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @rl_prof_id: rate limit profile id
+ *
+ * This function configures node element's bw limit.
+ */
+static enum ice_status
+ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u16 rl_prof_id)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_MAX_BW:
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			return ICE_ERR_CFG;
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_SHARED_BW:
+		/* Check for removing shared bw */
+		if (rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID) {
+			/* remove shared profile */
+			data->valid_sections &= ~ICE_AQC_ELEM_VALID_SHARED;
+			data->srl_id = 0; /* clear srl field */
+
+			/* re-enable EIR with the default profile */
+			data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+			data->eir_bw.bw_profile_idx =
+				CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+			break;
+		}
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if ((data->valid_sections & ICE_AQC_ELEM_VALID_EIR) &&
+		    (LE16_TO_CPU(data->eir_bw.bw_profile_idx) !=
+			    ICE_SCHED_DFLT_RL_PROF_ID))
+			return ICE_ERR_CFG;
+		/* EIR bw is set to default, disable it */
+		data->valid_sections &= ~ICE_AQC_ELEM_VALID_EIR;
+		/* Okay to enable shared bw now */
+		data->valid_sections |= ICE_AQC_ELEM_VALID_SHARED;
+		data->srl_id = CPU_TO_LE16(rl_prof_id);
+		break;
+	default:
+		/* Unknown rate limit type */
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	return ice_sched_update_elem(hw, node, &buf);
+}
+
+/**
+ * ice_sched_get_node_rl_prof_id - get node's rate limit profile id
+ * @node: sched node
+ * @rl_type: rate limit type
+ *
+ * If existing profile matches, it returns the corresponding rate
+ * limit profile id, otherwise it returns an invalid id as error.
+ */
+static u16
+ice_sched_get_node_rl_prof_id(struct ice_sched_node *node,
+			      enum ice_rl_type rl_type)
+{
+	u16 rl_prof_id = ICE_SCHED_INVAL_PROF_ID;
+	struct ice_aqc_txsched_elem *data;
+
+	data = &node->info.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_CIR)
+			rl_prof_id = LE16_TO_CPU(data->cir_bw.bw_profile_idx);
+		break;
+	case ICE_MAX_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_EIR)
+			rl_prof_id = LE16_TO_CPU(data->eir_bw.bw_profile_idx);
+		break;
+	case ICE_SHARED_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			rl_prof_id = LE16_TO_CPU(data->srl_id);
+		break;
+	default:
+		break;
+	}
+
+	return rl_prof_id;
+}
+
+/**
+ * ice_sched_get_rl_prof_layer - selects rate limit profile creation layer
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @layer_index: layer index
+ *
+ * This function returns requested profile creation layer.
+ */
+static u8
+ice_sched_get_rl_prof_layer(struct ice_port_info *pi, enum ice_rl_type rl_type,
+			    u8 layer_index)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (layer_index >= hw->num_tx_sched_layers)
+		return ICE_SCHED_INVAL_LAYER_NUM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (hw->layer_info[layer_index].max_cir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_MAX_BW:
+		if (hw->layer_info[layer_index].max_eir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_SHARED_BW:
+		/* if current layer doesn't support SRL profile creation
+		 * then try a layer up or down.
+		 */
+		if (hw->layer_info[layer_index].max_srl_profiles)
+			return layer_index;
+		else if (layer_index < hw->num_tx_sched_layers - 1 &&
+			 hw->layer_info[layer_index + 1].max_srl_profiles)
+			return layer_index + 1;
+		else if (layer_index > 0 &&
+			 hw->layer_info[layer_index - 1].max_srl_profiles)
+			return layer_index - 1;
+		break;
+	default:
+		break;
+	}
+	return ICE_SCHED_INVAL_LAYER_NUM;
+}
+
+/**
+ * ice_sched_get_srl_node - get shared rate limit node
+ * @node: tree node
+ * @srl_layer: shared rate limit layer
+ *
+ * This function returns SRL node to be used for shared rate limit purpose.
+ * The caller needs to hold scheduler lock.
+ */
+static struct ice_sched_node *
+ice_sched_get_srl_node(struct ice_sched_node *node, u8 srl_layer)
+{
+	if (srl_layer > node->tx_sched_layer)
+		return node->children[0];
+	else if (srl_layer < node->tx_sched_layer)
+		/* A node can't be created without a parent. Every node
+		 * except the root always has a valid parent.
+		 */
+		return node->parent;
+	else
+		return node;
+}
+
+/**
+ * ice_sched_rm_rl_profile - remove rl profile id
+ * @pi: port information structure
+ * @layer_num: layer number where profiles are saved
+ * @profile_type: profile type like EIR, CIR, or SRL
+ * @profile_id: profile id to remove
+ *
+ * This function removes the rate limit profile from layer 'layer_num' of
+ * type 'profile_type' with profile id 'profile_id'. The caller needs to
+ * hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
+			u16 profile_id)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* Check the existing list for rl profile */
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    LE16_TO_CPU(rl_prof_elem->profile.profile_id) ==
+		    profile_id) {
+			if (rl_prof_elem->prof_id_ref)
+				rl_prof_elem->prof_id_ref--;
+
+			/* Remove old profile id from database */
+			status = ice_sched_del_rl_profile(pi->hw, rl_prof_elem);
+			if (status && status != ICE_ERR_IN_USE)
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+			break;
+		}
+	if (status == ICE_ERR_IN_USE)
+		status = ICE_SUCCESS;
+	return status;
+}
+
+/**
+ * ice_sched_set_node_bw_dflt - set node's bandwidth limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ * @layer_num: layer number where rl profiles are saved
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   enum ice_rl_type rl_type, u8 layer_num)
+{
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 profile_type;
+	u16 rl_prof_id;
+	u16 old_id;
+
+	hw = pi->hw;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		/* No SRL is configured for default case */
+		rl_prof_id = ICE_SCHED_NO_SHARED_RL_PROF_ID;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* Remove stale rl profile id */
+	if (old_id == ICE_SCHED_DFLT_RL_PROF_ID ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID)
+		return status;
+	return ice_sched_rm_rl_profile(pi, layer_num, profile_type, old_id);
+}
+
+/**
+ * ice_sched_set_eir_srl_excl - set EIR/SRL exclusiveness
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @layer_num: layer number where rate limit profiles are saved
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth value
+ *
+ * This function prepares the node element's bandwidth for SRL or EIR exclusively.
+ * EIR bw and Shared bw profiles are mutually exclusive and hence only one of
+ * them may be set for any given element. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   u8 layer_num, enum ice_rl_type rl_type, u32 bw)
+{
+	if (rl_type == ICE_SHARED_BW) {
+		/* An SRL node is passed in this case; it may be a different node */
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* SRL being removed, ice_sched_cfg_node_bw_lmt()
+			 * enables EIR to default. EIR is not set in this
+			 * case, so no additional action is required.
+			 */
+			return ICE_SUCCESS;
+
+		/* SRL being configured, set EIR to default here.
+		 * ice_sched_cfg_node_bw_lmt() disables EIR when it
+		 * configures SRL
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node, ICE_MAX_BW,
+						  layer_num);
+	} else if (rl_type == ICE_MAX_BW &&
+		   node->info.data.valid_sections & ICE_AQC_ELEM_VALID_SHARED) {
+		/* Remove Shared profile. Set default shared bw call
+		 * removes shared profile for a node.
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node,
+						  ICE_SHARED_BW,
+						  layer_num);
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_node_bw - set node's bandwidth
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: layer number
+ *
+ * This function adds a new profile corresponding to the requested bw,
+ * configures the node's rl profile id of type cir, eir, or srl, and removes
+ * the old profile id from the local database. The caller needs to hold the
+ * scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
+		      enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 old_id, rl_prof_id;
+
+	rl_prof_info = ice_sched_add_rl_profile(pi, rl_type, bw, layer_num);
+	if (!rl_prof_info)
+		return status;
+
+	rl_prof_id = LE16_TO_CPU(rl_prof_info->profile.profile_id);
+
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* New changes have been applied */
+	/* Increment the profile id reference count */
+	rl_prof_info->prof_id_ref++;
+
+	/* Check for old id removal */
+	if ((old_id == ICE_SCHED_DFLT_RL_PROF_ID && rl_type != ICE_SHARED_BW) ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID || old_id == rl_prof_id)
+		return status;
+
+	return ice_sched_rm_rl_profile(pi, layer_num,
+				       rl_prof_info->profile.flags,
+				       old_id);
+}
+
+/**
+ * ice_sched_set_node_bw_lmt - set node's bw limit
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * It updates the node's bw limit parameters, such as the rl profile id of
+ * type cir, eir, or srl. The caller needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_node *cfg_node = node;
+	enum ice_status status;
+
+	struct ice_hw *hw;
+	u8 layer_num;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+						node->tx_sched_layer);
+	if (layer_num >= hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node may be different */
+		cfg_node = ice_sched_get_srl_node(node, layer_num);
+		if (!cfg_node)
+			return ICE_ERR_CFG;
+	}
+	/* EIR bw and Shared bw profiles are mutually exclusive and
+	 * hence only one of them may be set for any given element
+	 */
+	status = ice_sched_set_eir_srl_excl(pi, cfg_node, layer_num, rl_type,
+					    bw);
+	if (status)
+		return status;
+	if (bw == ICE_SCHED_DFLT_BW)
+		return ice_sched_set_node_bw_dflt(pi, cfg_node, rl_type,
+						  layer_num);
+	return ice_sched_set_node_bw(pi, cfg_node, rl_type, bw, layer_num);
+}
+
+/**
+ * ice_sched_set_node_bw_dflt_lmt - set node's bw limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi,
+			       struct ice_sched_node *node,
+			       enum ice_rl_type rl_type)
+{
+	return ice_sched_set_node_bw_lmt(pi, node, rl_type,
+					 ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_validate_srl_node - Check node for SRL applicability
+ * @node: sched node to configure
+ * @sel_layer: selected SRL layer
+ *
+ * This function checks if the SRL can be applied to a selected layer node on
+ * behalf of the requested node (first argument). This function needs to be
+ * called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
+{
+	/* SRL profiles are not available on all layers. Check if the
+	 * SRL profile can be applied to a node above or below the
+	 * requested node. SRL configuration is possible only if the
+	 * selected layer's node has single child.
+	 */
+	if (sel_layer == node->tx_sched_layer ||
+	    ((sel_layer == node->tx_sched_layer + 1) &&
+	    node->num_children == 1) ||
+	    ((sel_layer == node->tx_sched_layer - 1) &&
+	    (node->parent && node->parent->num_children == 1)))
+		return ICE_SUCCESS;
+
+	return ICE_ERR_CFG;
+}
+
+/**
+ * ice_sched_set_q_bw_lmt - sets queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of queue scheduling node.
+ */
+static enum ice_status
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		goto exit_q_bw_lmt;
+	}
+
+	/* Return error if it is not a leaf node */
+	if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
+		goto exit_q_bw_lmt;
+
+	/* SRL bandwidth layer selection */
+	if (rl_type == ICE_SHARED_BW) {
+		u8 sel_layer; /* selected layer */
+
+		sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
+							node->tx_sched_layer);
+		if (sel_layer >= pi->hw->num_tx_sched_layers) {
+			status = ICE_ERR_PARAM;
+			goto exit_q_bw_lmt;
+		}
+		status = ice_sched_validate_srl_node(node, sel_layer);
+		if (status)
+			goto exit_q_bw_lmt;
+	}
+
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_q_bw_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_q_bw_lmt - configure queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+}
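+
+/* Usage sketch (illustrative only, not part of the original patch): cap a
+ * Tx queue at 100 Mbps via its leaf-node teid, using the wrappers above.
+ * The helper name is hypothetical; pi and q_teid are assumed to come from
+ * the caller's context.
+ */
+static enum ice_status
+ice_example_cap_queue(struct ice_port_info *pi, u32 q_teid)
+{
+	enum ice_status status;
+
+	/* 100000 Kbps == 100 Mbps max-bw (EIR) limit on the queue node */
+	status = ice_cfg_q_bw_lmt(pi, q_teid, ICE_MAX_BW, 100000);
+	if (status)
+		return status;
+
+	/* when the cap is no longer needed, fall back to the default */
+	return ice_cfg_q_bw_dflt_lmt(pi, q_teid, ICE_MAX_BW);
+}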
+
+/**
+ * ice_cfg_q_bw_dflt_lmt - configure queue bw default limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ *
+ * This function configures bw default limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_save_tc_node_bw - save tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function saves the modified values of the bandwidth settings for
+ * later replay (restore) after reset.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_lmt - sets tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bandwidth limit of tc node.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+			     enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw;
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
+	if (!status)
+		status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
+
+exit_set_tc_node_bw:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_lmt - configure tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
+}
+
+/**
+ * ice_cfg_tc_node_bw_dflt_lmt - configure tc node bw default limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ *
+ * This function configures bw default limit of tc node.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_save_tc_node_bw_alloc - save tc node's bw alloc information
+ * @pi: port information structure
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+				enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_alloc - set tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures the bandwidth alloc of the tc node, saves the
+ * changed settings for replay purposes, and returns success if it succeeds
+ * in modifying the bandwidth alloc setting.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			       enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
+					     bw_alloc);
+	if (status)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+
+exit_set_tc_node_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_alloc - configure tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures bw alloc of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+}
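+
+/* Usage sketch (illustrative only, not part of the original patch): program
+ * max-bw allocations for TC0 and TC1. Treating bw_alloc as a relative
+ * weight is an assumption here; the helper name is hypothetical.
+ */
+static enum ice_status
+ice_example_tc_bw_alloc(struct ice_port_info *pi)
+{
+	enum ice_status status;
+
+	/* give TC0 twice the max-bw allocation of TC1 */
+	status = ice_cfg_tc_node_bw_alloc(pi, 0, ICE_MAX_BW, 2);
+	if (status)
+		return status;
+	return ice_cfg_tc_node_bw_alloc(pi, 1, ICE_MAX_BW, 1);
+}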
+
+/**
+ * ice_sched_set_agg_bw_dflt_lmt - set agg node's bw limit to default
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves the aggregator node for each tc of the given
+ * VSI handle and sets the node's bw limits to default. This function
+ * needs to be called with the scheduler lock held.
+ */
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *node;
+
+		node = vsi_ctx->sched.ag_node[tc];
+		if (!node)
+			continue;
+
+		/* Set min profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
+		if (status)
+			break;
+
+		/* Set max profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
+		if (status)
+			break;
+
+		/* Remove shared profile, if there is one */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node,
+							ICE_SHARED_BW);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_get_node_by_id_type - get node from id type
+ * @pi: port information structure
+ * @id: identifier
+ * @agg_type: type of aggregator
+ * @tc: traffic class
+ *
+ * This function returns the node identified by id and agg_type for the
+ * given traffic class (tc). This function needs to be called with
+ * the scheduler lock held.
+ */
+static struct ice_sched_node *
+ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
+			      enum ice_agg_type agg_type, u8 tc)
+{
+	struct ice_sched_node *node = NULL;
+	struct ice_sched_node *child_node;
+
+	switch (agg_type) {
+	case ICE_AGG_TYPE_VSI: {
+		struct ice_vsi_ctx *vsi_ctx;
+		u16 vsi_handle = (u16)id;
+
+		if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+			break;
+		/* Get sched_vsi_info */
+		vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+		if (!vsi_ctx)
+			break;
+		node = vsi_ctx->sched.vsi_node[tc];
+		break;
+	}
+
+	case ICE_AGG_TYPE_AGG: {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (tc_node)
+			node = ice_sched_get_agg_node(pi->hw, tc_node, id);
+		break;
+	}
+
+	case ICE_AGG_TYPE_Q:
+		/* Only a single queue can be modified currently */
+		node = ice_sched_get_node(pi, id);
+		break;
+
+	case ICE_AGG_TYPE_QG:
+		/* Only a single queue group can be modified currently */
+		child_node = ice_sched_get_node(pi, id);
+		if (!child_node)
+			break;
+		node = child_node->parent;
+		break;
+
+	default:
+		break;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_set_node_bw_lmt_per_tc - set node bw limit per tc
+ * @pi: port information structure
+ * @id: id (software VSI handle or AGG id)
+ * @agg_type: aggregator type (VSI or AGG type node)
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function sets the bw limit of a VSI or aggregator scheduling node
+ * based on the tc information from the passed-in arguments.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return status;
+
+	if (rl_type == ICE_UNKNOWN_BW)
+		return status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_get_node_by_id_type(pi, id, agg_type, tc);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong id, agg type, or tc\n");
+		goto exit_set_node_bw_lmt_per_tc;
+	}
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_set_node_bw_lmt_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
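+
+/* Usage sketch (illustrative only, not part of the original patch): cap a
+ * VSI's max bandwidth at 1 Gbps on TC 0. vsi_handle is assumed to be a
+ * valid software VSI handle; the helper name is hypothetical.
+ */
+static enum ice_status
+ice_example_vsi_tc_cap(struct ice_port_info *pi, u16 vsi_handle)
+{
+	/* 1000000 Kbps == 1 Gbps max (EIR) limit on the VSI node of TC 0 */
+	return ice_sched_set_node_bw_lmt_per_tc(pi, (u32)vsi_handle,
+						ICE_AGG_TYPE_VSI, 0,
+						ICE_MAX_BW, 1000000);
+}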
+
+/**
+ * ice_sched_validate_vsi_srl_node - validate VSI SRL node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function validates the SRL node of the VSI node if the available
+ * SRL layer is different from the VSI node layer on all tc(s). This
+ * function needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		enum ice_status status;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = vsi_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(vsi_node, sel_layer);
+		if (status)
+			return status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_vsi_bw_shared_lmt - set VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle. When a
+ * bw value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the node.
+ */
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_vsi_srl_node(pi, vsi_handle);
+	if (status)
+		goto exit_set_vsi_bw_shared_lmt;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, vsi_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, vsi_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_set_vsi_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_validate_agg_srl_node - validate AGG SRL node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the SRL node of the AGG node if the available
+ * SRL layer is different from the AGG node layer on all tc(s). This
+ * function needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &pi->hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		return ICE_ERR_PARAM;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = agg_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(agg_node, sel_layer);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_bw_shared_lmt - set aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter(SRL) of all agg type
+ * nodes across all traffic classes for aggregator matching agg_id. When
+ * bw value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the
+ * node(s).
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *tmp;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_srl_node(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, tmp, &pi->hw->agg_list,
+				 ice_sched_agg_info, list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_agg_bw_shared_lmt;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		struct ice_sched_node *tc_node, *agg_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, agg_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, agg_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_agg_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
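+
+/* Usage sketch (illustrative only, not part of the original patch): apply
+ * or remove a 500 Mbps shared rate limit (SRL) across an aggregator's
+ * nodes. The helper name is hypothetical.
+ */
+static enum ice_status
+ice_example_agg_srl(struct ice_port_info *pi, u32 agg_id, bool enable)
+{
+	if (enable)
+		/* 500000 Kbps == 500 Mbps shared across all agg nodes */
+		return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, 500000);
+	/* passing ICE_SCHED_DFLT_BW removes the SRL again */
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id,
+					       ICE_SCHED_DFLT_BW);
+}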
+
+/**
+ * ice_sched_cfg_sibl_node_prio - configure node sibling priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only. This
+ * function needs to be called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	priority = (priority << ICE_AQC_ELEM_GENERIC_PRIO_S) &
+		   ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic &= ~ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic |= priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_alloc - configure node bw weight/alloc params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @bw_alloc: bw weight/allocation
+ *
+ * This function configures node element's bw allocation.
+ */
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	if (rl_type == ICE_MIN_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else if (rl_type == ICE_MAX_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else {
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_agg_cfg - create an aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function creates an aggregator node and intermediate nodes if required
+ * for the given TC
+ */
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *parent, *agg_node, *tc_node;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u32 first_node_teid;
+	u16 num_nodes_added;
+	u8 i, aggl;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	/* Does the agg node already exist? */
+	if (agg_node)
+		return status;
+
+	aggl = ice_sched_get_agg_layer(hw);
+
+	/* need one node in Agg layer */
+	num_nodes[aggl] = 1;
+
+	/* Check whether the intermediate nodes have space to add the
+	 * new agg. If they are full, then SW needs to allocate a new
+	 * intermediate node on those layers
+	 */
+	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
+		parent = ice_sched_get_first_node(hw, tc_node, i);
+
+		/* scan all the siblings */
+		while (parent) {
+			if (parent->num_children < hw->max_children[i])
+				break;
+			parent = parent->sibling;
+		}
+
+		/* all the nodes are full, reserve one for this layer */
+		if (!parent)
+			num_nodes[i]++;
+	}
+
+	/* add the agg node */
+	parent = tc_node;
+	for (i = hw->sw_entry_point_layer; i <= aggl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			/* register the aggregator id with the agg node */
+			if (parent && i == aggl)
+				parent->agg_id = agg_id;
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_is_agg_inuse - check whether the agg is in use or not
+ * @pi: port information structure
+ * @node: node pointer
+ *
+ * This function checks whether the aggregator is attached to any VSI.
+ */
+static bool
+ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	if (node->tx_sched_layer < vsil - 1) {
+		for (i = 0; i < node->num_children; i++)
+			if (ice_sched_is_agg_inuse(pi, node->children[i]))
+				return true;
+		return false;
+	} else {
+		return node->num_children ? true : false;
+	}
+}
+
+/**
+ * ice_sched_rm_agg_cfg - remove the aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function removes the aggregator node and intermediate nodes if any
+ * from the given TC
+ */
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Can't remove the agg node if it has children */
+	if (ice_sched_is_agg_inuse(pi, agg_node))
+		return ICE_ERR_IN_USE;
+
+	/* need to remove the whole subtree if agg node is the
+	 * only child.
+	 */
+	while (agg_node->tx_sched_layer > hw->sw_entry_point_layer) {
+		struct ice_sched_node *parent = agg_node->parent;
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (parent->num_children > 1)
+			break;
+
+		agg_node = parent;
+	}
+
+	ice_free_sched_node(pi, agg_node);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_get_free_vsi_parent - Find a free parent node in agg subtree
+ * @hw: pointer to the hw struct
+ * @node: pointer to a child node
+ * @num_nodes: num nodes count array
+ *
+ * This function walks through the aggregator subtree to find a free parent
+ * node
+ */
+static struct ice_sched_node *
+ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
+			      u16 *num_nodes)
+{
+	u8 l = node->tx_sched_layer;
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* Is it the VSI parent layer? */
+	if (l == vsil - 1)
+		return (node->num_children < hw->max_children[l]) ? node : NULL;
+
+	/* We have intermediate nodes. Let's walk through the subtree. If the
+	 * intermediate node has space to add a new node then clear the count
+	 */
+	if (node->num_children < hw->max_children[l])
+		num_nodes[l] = 0;
+	/* The recursive call below is intentional and won't go more than
+	 * 2 or 3 levels deep.
+	 */
+	for (i = 0; i < node->num_children; i++) {
+		struct ice_sched_node *parent;
+
+		parent = ice_sched_get_free_vsi_parent(hw, node->children[i],
+						       num_nodes);
+		if (parent)
+			return parent;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_sched_update_parent - update the parent in the SW DB
+ * @new_parent: pointer to a new parent node
+ * @node: pointer to a child node
+ *
+ * This function removes the child from the old parent and adds it to a new
+ * parent
+ */
+static void
+ice_sched_update_parent(struct ice_sched_node *new_parent,
+			struct ice_sched_node *node)
+{
+	struct ice_sched_node *old_parent;
+	u8 i, j;
+
+	old_parent = node->parent;
+
+	/* update the old parent children */
+	for (i = 0; i < old_parent->num_children; i++)
+		if (old_parent->children[i] == node) {
+			for (j = i + 1; j < old_parent->num_children; j++)
+				old_parent->children[j - 1] =
+					old_parent->children[j];
+			old_parent->num_children--;
+			break;
+		}
+
+	/* now move the node to a new parent */
+	new_parent->children[new_parent->num_children++] = node;
+	node->parent = new_parent;
+	node->info.parent_teid = new_parent->info.node_teid;
+}
+
+/**
+ * ice_sched_move_nodes - move child nodes to a given parent
+ * @pi: port information structure
+ * @parent: pointer to parent node
+ * @num_items: number of child nodes to be moved
+ * @list: pointer to child node teids
+ *
+ * This function moves the child nodes to a given parent.
+ */
+static enum ice_status
+ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
+		     u16 num_items, u32 *list)
+{
+	struct ice_aqc_move_elem *buf;
+	struct ice_sched_node *node;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw;
+	u16 grps_movd = 0;
+	u8 i;
+
+	hw = pi->hw;
+
+	if (!parent || !num_items)
+		return ICE_ERR_PARAM;
+
+	/* Does the parent have enough space? */
+	if (parent->num_children + num_items >
+	    hw->max_children[parent->tx_sched_layer])
+		return ICE_ERR_AQ_FULL;
+
+	buf = (struct ice_aqc_move_elem *)ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_items; i++) {
+		node = ice_sched_find_node_by_teid(pi->root, list[i]);
+		if (!node) {
+			status = ICE_ERR_PARAM;
+			goto move_err_exit;
+		}
+
+		buf->hdr.src_parent_teid = node->info.parent_teid;
+		buf->hdr.dest_parent_teid = parent->info.node_teid;
+		buf->teid[0] = node->info.node_teid;
+		buf->hdr.num_elems = CPU_TO_LE16(1);
+		status = ice_aq_move_sched_elems(hw, 1, buf, sizeof(*buf),
+						 &grps_movd, NULL);
+		if (status || grps_movd != 1) {
+			status = ICE_ERR_CFG;
+			goto move_err_exit;
+		}
+
+		/* update the SW DB */
+		ice_sched_update_parent(parent, node);
+	}
+
+move_err_exit:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_move_vsi_to_agg - move VSI to aggregator node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function moves a VSI to an aggregator node or its subtree.
+ * Intermediate nodes may be created if required.
+ */
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc)
+{
+	struct ice_sched_node *vsi_node, *agg_node, *tc_node, *parent;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u32 first_node_teid, vsi_teid;
+	enum ice_status status;
+	u16 num_nodes_added;
+	u8 aggl, vsil, i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	aggl = ice_sched_get_agg_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	/* initialize intermediate node count to 1 between agg and VSI layers */
+	for (i = aggl + 1; i < vsil; i++)
+		num_nodes[i] = 1;
+
+	/* Check whether the agg subtree has any free node to add the VSI */
+	for (i = 0; i < agg_node->num_children; i++) {
+		parent = ice_sched_get_free_vsi_parent(pi->hw,
+						       agg_node->children[i],
+						       num_nodes);
+		if (parent)
+			goto move_nodes;
+	}
+
+	/* add new nodes */
+	parent = agg_node;
+	for (i = aggl + 1; i < vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+	}
+
+move_nodes:
+	vsi_teid = LE32_TO_CPU(vsi_node->info.node_teid);
+	return ice_sched_move_nodes(pi, parent, 1, &vsi_teid);
+}
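+
+/* Lifecycle sketch (illustrative only, not part of the original patch):
+ * create an aggregator on a TC and move a VSI under it using the helpers
+ * above. The caller is assumed to hold the scheduler lock where the
+ * helpers require it; the helper name is hypothetical.
+ */
+static enum ice_status
+ice_example_agg_lifecycle(struct ice_port_info *pi, u32 agg_id,
+			  u16 vsi_handle, u8 tc)
+{
+	enum ice_status status;
+
+	status = ice_sched_add_agg_cfg(pi, agg_id, tc);
+	if (status)
+		return status;
+	status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
+	/* once every VSI has been moved away again, the subtree can be
+	 * removed with ice_sched_rm_agg_cfg(pi, agg_id, tc)
+	 */
+	return status;
+}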
+
+/**
+ * ice_cfg_rl_burst_size - Set burst size value
+ * @hw: pointer to the hw struct
+ * @bytes: burst size in bytes
+ *
+ * This function configures/sets the burst size to the requested new value.
+ * The new burst size value is used for future rate limit calls. It doesn't
+ * change the existing or previously created RL profiles.
+ */
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
+{
+	u16 burst_size_to_prog;
+
+	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
+	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
+		return ICE_ERR_PARAM;
+	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
+		/* byte granularity case */
+		/* Disable MSB granularity bit */
+		burst_size_to_prog = ICE_BYTE_GRANULARITY;
+		/* round number to nearest 256 granularity */
+		bytes = ice_round_to_num(bytes, 256);
+		/* check that rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
+		burst_size_to_prog |= (u16)bytes;
+	} else {
+		/* k bytes granularity case */
+		/* Enable MSB granularity bit */
+		burst_size_to_prog = ICE_KBYTE_GRANULARITY;
+		/* round number to nearest 1024 granularity */
+		bytes = ice_round_to_num(bytes, 1024);
+		/* check that rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY;
+		/* The value is in k bytes */
+		burst_size_to_prog |= (u16)(bytes / 1024);
+	}
+	hw->max_burst_size = burst_size_to_prog;
+	return ICE_SUCCESS;
+}
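+
+/* Worked example of the encoding above (illustrative only, not part of
+ * the original patch); ice_round_to_num() rounds to the nearest multiple.
+ */
+static void
+ice_example_burst_size(struct ice_hw *hw)
+{
+	/* 1000 <= 2047: byte granularity; rounds to 1024 and programs
+	 * 0x400 (granularity bit clear)
+	 */
+	ice_cfg_rl_burst_size(hw, 1000);
+	/* 5000 > 2047: KB granularity; rounds to 5120, 5120 / 1024 = 5,
+	 * programs 0x800 | 5 = 0x805 (granularity bit set)
+	 */
+	ice_cfg_rl_burst_size(hw, 5000);
+}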
+
+/**
+ * ice_sched_replay_node_prio - re-configure node priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: priority value
+ *
+ * This function configures node element's priority value. It
+ * needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			   u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	data->generic = priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_replay_node_bw - replay node(s) bw
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @bw_t_info: bw type information
+ *
+ * This function restores node's bw from bw_t_info. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node,
+			 struct ice_bw_type_info *bw_t_info)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	u16 bw_alloc;
+
+	if (!node)
+		return status;
+	if (!ice_is_any_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CNT))
+		return ICE_SUCCESS;
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_PRIO)) {
+		status = ice_sched_replay_node_prio(hw, node,
+						    bw_t_info->generic);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MIN_BW,
+						   bw_t_info->cir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR_WT)) {
+		bw_alloc = bw_t_info->cir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MIN_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
+						   bw_t_info->eir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR_WT)) {
+		bw_alloc = bw_t_info->eir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MAX_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_SHARED))
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_SHARED_BW,
+						   bw_t_info->shared_bw);
+	return status;
+}
+
+/**
+ * ice_sched_replay_agg_bw - replay aggregator node(s) bw
+ * @hw: pointer to the hw struct
+ * @agg_info: aggregator data structure
+ *
+ * This function replays the bandwidth of aggregator type nodes. The caller
+ * needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_agg_bw(struct ice_hw *hw, struct ice_sched_agg_info *agg_info)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_any_bit_set(agg_info->bw_t_info[tc].bw_t_bitmap,
+					ICE_BW_TYPE_CNT))
+			continue;
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		agg_node = ice_sched_get_agg_node(hw, tc_node,
+						  agg_info->agg_id);
+		if (!agg_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		status = ice_sched_replay_node_bw(hw, agg_node,
+						  &agg_info->bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_get_ena_tc_bitmap - get enabled TC bitmap
+ * @pi: port info struct
+ * @tc_bitmap: 8 bits TC bitmap to check
+ * @ena_tc_bitmap: 8 bits enabled TC bitmap to return
+ *
+ * This function returns the enabled TC bitmap in ena_tc_bitmap. Some TCs
+ * may be missing after a reset, so only the currently enabled TCs are
+ * returned. This function needs to be called with the scheduler lock held.
+ */
+static void
+ice_sched_get_ena_tc_bitmap(struct ice_port_info *pi, ice_bitmap_t *tc_bitmap,
+			    ice_bitmap_t *ena_tc_bitmap)
+{
+	u8 tc;
+
+	/* Some tc(s) may be missing after reset, adjust for replay */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++)
+		if (ice_is_tc_ena(*tc_bitmap, tc) &&
+		    (ice_sched_get_tc_node(pi, tc)))
+			ice_set_bit(tc, ena_tc_bitmap);
+}
+
+/**
+ * ice_sched_replay_agg - recreate aggregator node(s)
+ * @hw: pointer to the hw struct
+ *
+ * This function re-creates aggregator type nodes that were not replayed
+ * earlier. It also replays aggregator bw information. These aggregator
+ * nodes are not associated with a VSI type node yet.
+ */
+void ice_sched_replay_agg(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		/* replay agg (re-create aggregator node) */
+		if (!ice_cmp_bitmap(agg_info->tc_bitmap,
+				    agg_info->replay_tc_bitmap,
+				    ICE_MAX_TRAFFIC_CLASS)) {
+			ice_declare_bitmap(replay_bitmap,
+					   ICE_MAX_TRAFFIC_CLASS);
+			enum ice_status status;
+
+			ice_zero_bitmap(replay_bitmap,
+					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_sched_get_ena_tc_bitmap(pi,
+						    agg_info->replay_tc_bitmap,
+						    replay_bitmap);
+			status = ice_sched_cfg_agg(hw->port_info,
+						   agg_info->agg_id,
+						   ICE_AGG_TYPE_AGG,
+						   replay_bitmap);
+			if (status) {
+				ice_info(hw, "Replay agg id[%d] failed\n",
+					 agg_info->agg_id);
+				/* Move on to next one */
+				continue;
+			}
+			/* Replay agg node bw (restore agg bw) */
+			status = ice_sched_replay_agg_bw(hw, agg_info);
+			if (status)
+				ice_info(hw, "Replay agg bw [id=%d] failed\n",
+					 agg_info->agg_id);
+		}
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_agg_vsi_preinit - agg/VSI replay pre-initialization
+ * @hw: pointer to the hw struct
+ *
+ * This function initializes the aggregator(s) TC bitmap to zero, a
+ * required preinit step for replaying aggregators.
+ */
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_info->tc_bitmap[0] = 0;
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			agg_vsi_info->tc_bitmap[0] = 0;
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_tc_node_bw - replay tc node(s) bw
+ * @hw: pointer to the hw struct
+ *
+ * This function replays the tc nodes. The caller needs to hold the
+ * scheduler lock.
+ */
+enum ice_status
+ice_sched_replay_tc_node_bw(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node)
+			continue; /* tc not present */
+		status = ice_sched_replay_node_bw(hw, tc_node,
+						  &hw->tc_node_bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_bw - replay VSI type node(s) bw
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function replays the bandwidth of VSI type nodes. This function
+ * needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
+			ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_bw_type_info *bw_t_info;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
+		status = ice_sched_replay_node_bw(hw, vsi_node, bw_t_info);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_agg - replay agg & VSI to aggregator node(s)
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays the aggregator node, the VSI to aggregator node
+ * association, and their node bandwidth information. This function needs
+ * to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status;
+
+	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
+	if (!agg_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	ice_sched_get_ena_tc_bitmap(pi, agg_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Replay agg node associated to vsi_handle */
+	status = ice_sched_cfg_agg(hw->port_info, agg_info->agg_id,
+				   ICE_AGG_TYPE_AGG, replay_bitmap);
+	if (status)
+		return status;
+	/* Replay agg node bw (restore agg bw) */
+	status = ice_sched_replay_agg_bw(hw, agg_info);
+	if (status)
+		return status;
+
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	ice_sched_get_ena_tc_bitmap(pi, agg_vsi_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Move this VSI (vsi_handle) to above aggregator */
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_info->agg_id, vsi_handle,
+					    replay_bitmap);
+	if (status)
+		return status;
+	/* Replay VSI bw (restore VSI bw) */
+	return ice_sched_replay_vsi_bw(hw, vsi_handle,
+				       agg_vsi_info->tc_bitmap);
+}
+
+/**
+ * ice_replay_vsi_agg - replay VSI to aggregator node
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays the association of a VSI to aggregator type nodes
+ * and the node bandwidth information.
+ */
+enum ice_status
+ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_replay_vsi_agg(hw, vsi_handle);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
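+
+/* Replay-ordering sketch (illustrative only, not part of the original
+ * patch): after a reset, clear the TC bitmaps, replay each valid VSI,
+ * then re-create any aggregators not replayed via a VSI. num_vsis is an
+ * assumed upper bound supplied by the caller, and the ordering shown is
+ * an assumption based on the helpers above.
+ */
+static void
+ice_example_replay_all(struct ice_hw *hw, u16 num_vsis)
+{
+	u16 vsi_handle;
+
+	ice_sched_replay_agg_vsi_preinit(hw);
+	for (vsi_handle = 0; vsi_handle < num_vsis; vsi_handle++)
+		if (ice_is_vsi_valid(hw, vsi_handle))
+			ice_replay_vsi_agg(hw, vsi_handle);
+	ice_sched_replay_agg(hw);
+}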
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..a556594
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12-bit register that is configured while creating the RL
+ * profile(s). The MSB is a granularity bit that tells the granularity type:
+ * 0 - LSB bits are in byte granularity
+ * 1 - LSB bits are in 1K byte granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
+
+#define ICE_RL_PROF_FREQUENCY 446000000
+#define ICE_RL_PROF_ACCURACY_BYTES 128
+#define ICE_RL_PROF_MULTIPLIER 10000
+#define ICE_RL_PROF_TS_MULTIPLIER 32
+#define ICE_RL_PROF_FRACTION 512
+
+struct rl_profile_params {
+	u32 bw;			/* in Kbps */
+	u16 rl_multiplier;
+	u16 wake_up_calc;
+	u16 rl_encode;
+};
+
+/* BW rate limit profile parameters list entry along
+ * with bandwidth maintained per layer in port info
+ */
+struct ice_aqc_rl_profile_info {
+	struct ice_aqc_rl_profile_elem profile;
+	struct LIST_ENTRY_TYPE list_entry;
+	u32 bw;			/* requested */
+	u16 prof_id_ref;	/* profile id to node association ref count */
+};
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+	/* save agg vsi TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	/* save agg TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+/* FW AQ command calls */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf, u16 buf_size,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd);
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+/* Get a scheduling node from SW DB for given TEID */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid);
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id);
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle);
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd);
+
+/* Tx scheduler rate limiter functions */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+	    enum ice_agg_type agg_type, u8 tc_bitmap);
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap);
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw);
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio);
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc);
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node);
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc);
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info);
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi);
+#endif /* _ICE_SCHED_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 09/34] net/ice: Add virtual switch code
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (7 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 08/34] net/ice: Add basic transmit scheduler Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 10/34] net/ice: Add code to work with the NVM Wenzhuo Lu
                     ` (25 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to handle the virtual switch within the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 2812 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  333 +++++
 2 files changed, 3145 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..0379cd0
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2812 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * A note on the hardcoded values:
+ * byte 0 = 0x2: to identify it as a locally administered DA MAC
+ * byte 6 = 0x2: to identify it as a locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In the case of a VLAN filter, the first two bytes define the ether
+ *	type (0x8100) and the remaining two bytes are placeholders for
+ *	programming a given VLAN id.
+ *	In the case of an Ether type filter, it is treated as a header
+ *	without a VLAN tag, and bytes 12 and 13 are used to program a given
+ *	Ether type instead.
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe bookkeeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
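+
+/* Hypothetical usage sketch (illustration only, not part of this patch):
+ * this function dereferences hw->switch_info, so the caller is expected to
+ * have allocated it first, e.g. during device initialization:
+ *
+ *	hw->switch_info = (struct ice_switch_info *)
+ *		ice_calloc(hw, 1, sizeof(*hw->switch_info));
+ *	if (!hw->switch_info)
+ *		return ICE_ERR_NO_MEMORY;
+ *	status = ice_init_def_sw_recp(hw);
+ */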
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller first calls this function with *req_desc set
+ * to 0. If the response from f/w has *req_desc set to 0, all the switch
+ * configuration information has been returned; if non-zero (meaning not all
+ * the information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter. It reflects the number of elements
+ * in the response buffer. The caller of this function should use *num_elems
+ * while parsing the response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
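+
+/* The req_desc continuation protocol described above is caller-driven;
+ * ice_get_initial_sw_cfg() later in this file uses exactly this pattern.
+ * A minimal sketch of the loop (buffer setup elided):
+ *
+ *	u16 req_desc = 0, num_elems;
+ *
+ *	do {
+ *		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+ *					   &req_desc, &num_elems, NULL);
+ *		if (status)
+ *			break;
+ *		... parse num_elems entries from rbuf ...
+ *	} while (req_desc);
+ */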
+
+
+
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
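+
+/* Illustration (hypothetical caller): because this helper does no bounds or
+ * NULL checking itself, callers are expected to pair it with
+ * ice_is_vsi_valid(), as the functions below in this file do:
+ *
+ *	if (!ice_is_vsi_valid(hw, vsi_handle))
+ *		return ICE_ERR_PARAM;
+ *	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+ */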
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware also add it into the VSI handle list.
+ * If this function gets called after reset for existing VSIs, then update
+ * with the new HW VSI number in the corresponding VSI handle list entry.
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
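+
+/* Hypothetical add/free round trip (sketch only; ctx.info setup elided):
+ *
+ *	struct ice_vsi_ctx ctx = { 0 };
+ *
+ *	ctx.alloc_from_pool = true;	(firmware picks the HW VSI number)
+ *	... fill ctx.info and ctx.flags as needed ...
+ *	status = ice_add_vsi(hw, vsi_handle, &ctx, NULL);
+ *	...
+ *	status = ice_free_vsi(hw, vsi_handle, &ctx, false, NULL);
+ */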
+
+/**
+ * ice_free_vsi - free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
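+
+/* Usage note (sketch): allocation and free are symmetric through this one
+ * helper. A hypothetical pairing for a MAC lookup list:
+ *
+ *	u16 vsi_list_id;
+ *
+ *	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id,
+ *					    ICE_SW_LKUP_MAC,
+ *					    ice_aqc_opc_alloc_res);
+ *	...
+ *	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id,
+ *					    ICE_SW_LKUP_MAC,
+ *					    ice_aqc_opc_free_res);
+ */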
+
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		/* Setting LB for prune actions will result in replicated
+		 * packets to the internal switch that will be dropped.
+		 */
+		if (fi->lkup_type != ICE_SW_LKUP_VLAN)
+			fi->lb_en = true;
+
+		/* Set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. Any one of the following is true:
+		 * 2.1 The lookup is a directional lookup like ethertype,
+		 * promiscuous, ethertype-mac, promiscuous-vlan
+		 * and default-port OR
+		 * 2.2 The lookup is VLAN, OR
+		 * 2.3 The lookup is MAC with mcast or bcast addr for MAC, OR
+		 * 2.4 The lookup is MAC_VLAN with mcast or bcast addr for MAC.
+		 *
+		 * OR
+		 *
+		 * The switch is a VEPA.
+		 *
+		 * In all other cases, the LAN enable has to be set to false.
+		 */
+		if (hw->evb_veb) {
+			if (fi->lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC ||
+			    fi->lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC_VLAN ||
+			    fi->lkup_type == ICE_SW_LKUP_DFLT ||
+			    fi->lkup_type == ICE_SW_LKUP_VLAN ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)))
+				fi->lan_en = true;
+		} else {
+			fi->lan_en = true;
+		}
+	}
+}
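+
+/* Worked example of the rules above (VEB case): a Tx forwarding filter on a
+ * unicast MAC address gets lb_en = true (it is not a VLAN prune rule) and
+ * lan_en = false (a unicast MAC on a VEB matches none of cases 2.1-2.4), so
+ * matching traffic is looped back internally rather than sent out on the
+ * wire.
+ */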
+
+/**
+ * ice_ilog2 - Calculates the integer log base 2 of a number
+ * @n: number on which to perform operation
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
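+
+/* Example: ice_ilog2(8) == 3 and ice_ilog2(5) == 2 (the position of the
+ * highest set bit), while ice_ilog2(0) == -1. The ICE_FWD_TO_QGRP case
+ * below relies on this to encode a power-of-two queue-group size as a
+ * region exponent.
+ */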
+
+
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on mac_entry
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
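+
+/* Hypothetical usage sketch: filling a rule that forwards a unicast MAC to
+ * a VSI (values illustrative only; this mirrors how the helpers below use
+ * this function):
+ *
+ *	struct ice_fltr_info f = { 0 };
+ *
+ *	f.lkup_type = ICE_SW_LKUP_MAC;
+ *	f.fltr_act = ICE_FWD_TO_VSI;
+ *	f.flag = ICE_FLTR_TX;
+ *	... set f.l_data.mac.mac_addr, f.fwd_id.hw_vsi_id and f.src ...
+ *	ice_fill_sw_rule(hw, &f, s_rule, ice_aqc_opc_add_sw_rules);
+ */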
+
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold a software marker and update the switch rule
+ * entry pointed to by m_ent with the newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule id of the previously created rule with a
+	 * single action. Once the update happens, hardware will treat this
+	 * as a large action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list id to VSI mapping
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The bookkeeping entries will get removed when the base driver
+	 * calls the remove filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do the bookkeeping associated with adding filter
+ * information. The algorithm for the bookkeeping is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list id
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry was a large action then the large action
+		 * needs to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
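+
+/* Concrete walk-through of the bookkeeping above (hypothetical handles):
+ * VSI 3 subscribes to a MAC filter first, creating a plain FWD_TO_VSI rule.
+ * When VSI 7 subscribes to the same MAC, a two-entry VSI list {3, 7} is
+ * allocated and the existing rule is rewritten to ICE_FWD_TO_VSI_LIST. Any
+ * further subscriber only needs the cheaper update-VSI-list path in the
+ * else branch.
+ */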
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists needs to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list id found containing vsi_handle
+ *
+ * Helper function to search a VSI list with single entry containing given VSI
+ * handle element. This can be extended further to search VSI list with more
+ * than 1 vsi_count. Returns pointer to VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list should be emptied before this function is called to remove the
+ * VSI list.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
+		/* a ref_cnt > 1 indicates that the vsi_list is being
+		 * shared by multiple rules. Decrement the ref_cnt and
+		 * remove this rule, but do not modify the list, as it
+		 * is in-use by other rules.
+		 */
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = true;
+	} else {
+		/* a ref_cnt of 1 indicates the vsi_list is only used
+		 * by one rule. However, the original removal request is only
+		 * for a single VSI. Update the vsi_list first, and only
+		 * remove the rule if there are no further VSIs in this list.
+		 */
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		/* Remove the bookkeeping entry from the list */
+		ice_free(hw, s_rule);
+
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't
+ * check for duplicates in this case; removing duplicates from a given
+ * list should be taken care of by the caller of this function.
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is vsi num */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Call AQ switch rule in AQ_MAX chunk */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill up rule id based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The bookkeeping entries will get removed when
+			 * the base driver calls the remove filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
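+
+/* Hypothetical caller sketch for ice_add_mac() (illustration only; the
+ * field values follow the parameter checks made above):
+ *
+ *	struct ice_fltr_list_entry entry = { 0 };
+ *	struct LIST_HEAD_TYPE m_list;
+ *
+ *	INIT_LIST_HEAD(&m_list);
+ *	entry.fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ *	entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	entry.fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	entry.fltr_info.vsi_handle = vsi_handle;
+ *	... copy the MAC into entry.fltr_info.l_data.mac.mac_addr ...
+ *	LIST_ADD(&entry.list_entry, &m_list);
+ *	status = ice_add_mac(hw, &m_list);
+ */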
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing VSI that we
+			 * want to add. If found, use the same vsi_list_id for
+			 * this new VLAN rule or else create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI ID only if
+		 * it is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* The VLAN rule exists and its VSI list is referenced by more
+		 * than one VLAN rule. Create a new VSI list containing the
+		 * previous VSI plus the new VSI, and update the existing VLAN
+		 * rule to point to the new VSI list ID.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a single VSI; we should never hit the condition below.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* before overriding VSI list map info. decrement ref_cnt of
+		 * previous VSI list
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
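+
+/* Illustrative usage (driver context assumed, not part of this patch):
+ * add VLAN 100 on a VSI by filling one list entry and handing the list
+ * head to ice_add_vlan():
+ *
+ *	struct ice_fltr_list_entry entry = { 0 };
+ *	struct LIST_HEAD_TYPE v_list;
+ *
+ *	INIT_LIST_HEAD(&v_list);
+ *	entry.fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+ *	entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	entry.fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	entry.fltr_info.vsi_handle = vsi_handle;
+ *	entry.fltr_info.l_data.vlan.vlan_id = 100;
+ *	LIST_ADD(&entry.list_entry, &v_list);
+ *	status = ice_add_vlan(hw, &v_list);
+ */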
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
+ * @hw: pointer to the hardware structure
+ * @mv_list: list of MAC and VLAN filters
+ *
+ * If the VSI on which the MAC-VLAN pair has to be added has Rx and Tx VLAN
+ * pruning bits enabled, it is the caller's responsibility to also add a
+ * VLAN-only filter on the same VSI. Otherwise, packets belonging to that
+ * VLAN won't be received on that VSI.
+ */
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
+{
+	struct ice_fltr_list_entry *mv_list_itr;
+
+	if (!mv_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
+			    list_entry) {
+		enum ice_sw_lkup_type l_type =
+			mv_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		mv_list_itr->status =
+			ice_add_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+					      mv_list_itr);
+		if (mv_list_itr->status)
+			return mv_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif
+
+
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @pi: pointer to the port_info structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the SWID)
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	struct ice_hw *hw = pi->hw;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = pi->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_tx_vsi_rule_id;
+	}
+
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = hw_vsi_id;
+			pi->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = hw_vsi_id;
+			pi->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
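+
+/* Illustrative usage (driver context assumed, not part of this patch):
+ * make a VSI the default receive VSI for the switch, then undo it:
+ *
+ *	status = ice_cfg_dflt_vsi(pi, vsi_handle, true, ICE_FLTR_RX);
+ *	...
+ *	status = ice_cfg_dflt_vsi(pi, vsi_handle, false, ICE_FLTR_RX);
+ */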
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. Caller should be aware that this call will only work if all
+ * the entries passed into m_list were added previously. It will not attempt to
+ * do a partial remove of entries that were found.
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of MAC VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+						 v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif /* !NO_MACVLAN_SUPPORT */
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of fltr_info.fltr_act and
+ * fltr_info.fwd_id fields. These are set such that later logic can
+ * extract which VSI to remove the filter from, and pass on that information.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with list.
+ */
+static enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure the VSI ID is valid and within bounds */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+
+/**
+ * ice_determine_promisc_mask
+ * @fi: filter info to parse
+ *
+ * Helper function to determine which ICE_PROMISC_ mask corresponds
+ * to the given filter info.
+ */
+static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
+{
+	u16 vid = fi->l_data.mac_vlan.vlan_id;
+	u8 *macaddr = fi->l_data.mac.mac_addr;
+	bool is_tx_fltr = false;
+	u8 promisc_mask = 0;
+
+	if (fi->flag == ICE_FLTR_TX)
+		is_tx_fltr = true;
+
+	if (IS_BROADCAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_BCAST_TX : ICE_PROMISC_BCAST_RX;
+	else if (IS_MULTICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_MCAST_TX : ICE_PROMISC_MCAST_RX;
+	else if (IS_UNICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_UCAST_TX : ICE_PROMISC_UCAST_RX;
+	if (vid)
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_VLAN_TX : ICE_PROMISC_VLAN_RX;
+
+	return promisc_mask;
+}
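+
+/* Worked example (not part of this patch): a Tx filter whose DA is the
+ * broadcast address and whose VLAN ID is 100 maps to
+ * ICE_PROMISC_BCAST_TX | ICE_PROMISC_VLAN_TX, i.e. 0x20 | 0x80 = 0xa0,
+ * using the enum ice_promisc_flags values from ice_switch.h.
+ */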
+
+
+/**
+ * ice_remove_promisc - Remove promisc based filter rules
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe ID for which the rule needs to be removed
+ * @v_list: list of promisc entries
+ */
+static enum ice_status
+ice_remove_promisc(struct ice_hw *hw, u8 recp_id,
+		   struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, recp_id, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_clear_vsi_promisc - clear specified promiscuous mode(s) for given VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to clear mode
+ * @promisc_mask: mask of promiscuous config bits to clear
+ * @vid: VLAN ID to clear VLAN promiscuous
+ */
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry, *tmp;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct ice_fltr_mgmt_list_entry *itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u8 recipe_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (vid)
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	else
+		recipe_id = ICE_SW_LKUP_PROMISC;
+
+	rule_head = &sw->recp_list[recipe_id].filt_rules;
+	rule_lock = &sw->recp_list[recipe_id].filt_rule_lock;
+
+	INIT_LIST_HEAD(&remove_list_head);
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(itr, rule_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		u8 fltr_promisc_mask = 0;
+
+		if (!ice_vsi_uses_fltr(itr, vsi_handle))
+			continue;
+
+		fltr_promisc_mask |=
+			ice_determine_promisc_mask(&itr->fltr_info);
+
+		/* Skip if filter is not completely specified by given mask */
+		if (fltr_promisc_mask & ~promisc_mask)
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							&remove_list_head,
+							&itr->fltr_info);
+		if (status) {
+			ice_release_lock(rule_lock);
+			goto free_fltr_list;
+		}
+	}
+	ice_release_lock(rule_lock);
+
+	status = ice_remove_promisc(hw, recipe_id, &remove_list_head);
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+
+	return status;
+}
+
+/**
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
+ */
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, u16 vid)
+{
+	enum { UCAST_FLTR = 1, MCAST_FLTR, BCAST_FLTR };
+	struct ice_fltr_list_entry f_list_entry;
+	struct ice_fltr_info new_fltr;
+	enum ice_status status = ICE_SUCCESS;
+	bool is_tx_fltr;
+	u16 hw_vsi_id;
+	int pkt_type;
+	u8 recipe_id;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_vsi_promisc\n");
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	ice_memset(&new_fltr, 0, sizeof(new_fltr), ICE_NONDMA_MEM);
+
+	if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC_VLAN;
+		new_fltr.l_data.mac_vlan.vlan_id = vid;
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	} else {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC;
+		recipe_id = ICE_SW_LKUP_PROMISC;
+	}
+
+	/* Separate filters must be set for each direction/packet type
+	 * combination, so we will loop over the mask value, store the
+	 * individual type, and clear it out in the input mask as it
+	 * is found.
+	 */
+	while (promisc_mask) {
+		u8 *mac_addr;
+
+		pkt_type = 0;
+		is_tx_fltr = false;
+
+		if (promisc_mask & ICE_PROMISC_UCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_RX;
+			pkt_type = UCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_UCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_TX;
+			pkt_type = UCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_RX;
+			pkt_type = MCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_TX;
+			pkt_type = MCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_RX;
+			pkt_type = BCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_TX;
+			pkt_type = BCAST_FLTR;
+			is_tx_fltr = true;
+		}
+
+		/* Check for VLAN promiscuous flag */
+		if (promisc_mask & ICE_PROMISC_VLAN_RX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_RX;
+		} else if (promisc_mask & ICE_PROMISC_VLAN_TX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_TX;
+			is_tx_fltr = true;
+		}
+
+		/* Set filter DA based on packet type */
+		mac_addr = new_fltr.l_data.mac.mac_addr;
+		if (pkt_type == BCAST_FLTR) {
+			ice_memset(mac_addr, 0xff, ETH_ALEN, ICE_NONDMA_MEM);
+		} else if (pkt_type == MCAST_FLTR ||
+			   pkt_type == UCAST_FLTR) {
+			/* Use the dummy ether header DA */
+			ice_memcpy(mac_addr, dummy_eth_header, ETH_ALEN,
+				   ICE_NONDMA_TO_NONDMA);
+			if (pkt_type == MCAST_FLTR)
+				mac_addr[0] |= 0x1;	/* Set multicast bit */
+		}
+
+		/* Need to reset this to zero for all iterations */
+		new_fltr.flag = 0;
+		if (is_tx_fltr) {
+			new_fltr.flag |= ICE_FLTR_TX;
+			new_fltr.src = hw_vsi_id;
+		} else {
+			new_fltr.flag |= ICE_FLTR_RX;
+			new_fltr.src = hw->port_info->lport;
+		}
+
+		new_fltr.fltr_act = ICE_FWD_TO_VSI;
+		new_fltr.vsi_handle = vsi_handle;
+		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+		f_list_entry.fltr_info = new_fltr;
+
+		status = ice_add_rule_internal(hw, recipe_id, &f_list_entry);
+		if (status != ICE_SUCCESS)
+			goto set_promisc_exit;
+	}
+
+set_promisc_exit:
+	return status;
+}
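+
+/* Illustrative usage (driver context assumed, not part of this patch):
+ * enable unicast and multicast Rx promiscuous mode on a VSI with no VLAN
+ * qualifier:
+ *
+ *	u8 mask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_MCAST_RX;
+ *
+ *	status = ice_set_vsi_promisc(hw, vsi_handle, mask, 0);
+ *
+ * The loop above peels off one direction/type bit per iteration, so this
+ * call installs two filter rules, one unicast and one multicast.
+ */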
+
+/**
+ * ice_set_vlan_vsi_promisc
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @rm_vlan_promisc: true to clear, false to set the given promiscuous mode(s)
+ *
+ * Configure VSI with all associated VLANs to given promiscuous mode(s)
+ */
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct LIST_HEAD_TYPE vsi_list_head;
+	struct LIST_HEAD_TYPE *vlan_head;
+	struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
+	enum ice_status status;
+	u16 vlan_id;
+
+	INIT_LIST_HEAD(&vsi_list_head);
+	vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
+	ice_acquire_lock(vlan_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
+					  &vsi_list_head);
+	ice_release_lock(vlan_lock);
+	if (status)
+		goto free_fltr_list;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
+			    list_entry) {
+		vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+		if (rm_vlan_promisc)
+			status = ice_clear_vsi_promisc(hw, vsi_handle,
+						       promisc_mask, vlan_id);
+		else
+			status = ice_set_vsi_promisc(hw, vsi_handle,
+						     promisc_mask, vlan_id);
+		if (status)
+			break;
+	}
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&list_itr->list_entry);
+		ice_free(hw, list_itr);
+	}
+	return status;
+}
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		ice_remove_promisc(hw, lkup, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+#ifndef NO_MACVLAN_SUPPORT
+		ice_remove_mac_vlan(hw, &remove_list_head);
+#else
+		ice_debug(hw, ICE_DBG_SW, "MAC VLAN look up is not supported yet\n");
+#endif /* !NO_MACVLAN_SUPPORT */
+		break;
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Removing filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
+
+
+
+
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ * @recp_id: Recipe id for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filters of recipe recp_id for the VSI represented by vsi_handle.
+ * A valid VSI handle must be passed.
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the src in case it is vsi num */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the src in case it is vsi num */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Replay the default recipes and the ones that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..66a172f
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST,
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or only set the data associated with the lookup
+		   * match; everything else should be zero.
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another bigger recipe, the chain index
+	 * corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, the count of
+	 * those recipes
+	 */
+	u8 n_grp_count;
+
+	/* Bitmap specifying the IDs associated with this group of recipes */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	struct ice_lock filt_rule_lock;	/* protect filter rule structure */
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	/* This allows the user to specify the recipe priority.
+	 * For now, this becomes 'fwd_priority' when the recipe is created;
+	 * recipes can usually have 'fwd' and 'join' priorities.
+	 */
+	u8 priority;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains MAC or VLAN membership
+ * to HW list mapping, since multiple VSIs can subscribe to the same MAC or
+ * VLAN. As an optimization the VSI list should be created only when a
+ * second VSI becomes a subscriber to the same MAC address. VSI lists are always
+ * used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#ifndef NO_MACVLAN_SUPPORT
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#endif /* !NO_MACVLAN_SUPPORT */
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+
+
+/* Promisc/defport setup for VSIs */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction);
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		    u16 vid);
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid);
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc);
+
+
+
+
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 10/34] net/ice: Add code to work with the NVM
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (8 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 09/34] net/ice: Add virtual switch code Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions Wenzhuo Lu
                     ` (24 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to read/write/query the NVM image.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_nvm.c | 387 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 387 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_nvm.c

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* The highest byte of the offset must be zero. */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
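+
+/* Worked example (not part of this patch): the 24-bit offset is split
+ * into a 16-bit low part and an 8-bit high part as done above, so offset
+ * 0x012345 is sent as offset_low = 0x2345 and offset_high = 0x01:
+ *
+ *	uint32_t offset = 0x012345;
+ *	uint16_t offset_low = offset & 0xFFFF;		// 0x2345
+ *	uint8_t offset_high = (offset >> 16) & 0xFF;	// 0x01
+ */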
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond SR lmt.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* We can access only up to 4KB (one sector), in one AQ write */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
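+
+/* Worked example (not part of this patch), assuming a 2048-word sector
+ * (ICE_SR_SECTOR_SIZE_IN_WORDS == 0x800): a 16-word read at offset 0x7F8
+ * would end at word 0x807; (0x7F8 + 15) / 0x800 == 1 while
+ * 0x7F8 / 0x800 == 0, so the access spans two sectors and is rejected
+ * with ICE_ERR_PARAM.
+ */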
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* values in "offset" and "words" parameters are sized as words
+	 * (16 bits) but ice_aq_read_nvm expects these values in bytes.
+	 * So do this conversion while calling ice_aq_read_nvm.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. The caller is expected to hold NVM ownership while reading
+ * (see ice_read_sr_buf below).
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words we should read in this step.
+		 * It's not allowed to read more than one sector at a time or
+		 * to cross sector boundaries.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* Check if this is last command, if so set proper flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
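+
+/* Worked example (not part of this patch), again assuming 2048-word
+ * sectors: reading 3000 words starting at offset 0x7F0 proceeds as
+ *	step 1: off_w = 0x7F0, read_size = 2048 - 0x7F0 = 16 words
+ *	step 2: off_w = 0,     read_size = min(3000 - 16, 2048) = 2048 words
+ *	step 3: off_w = 0,     read_size = min(3000 - 2064, 2048) = 936 words
+ * so no single AQ read crosses a sector boundary.
+ */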
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads a Shadow RAM word and acquires the NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using ice_read_sr_word_aq.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM setting
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as Shadow RAM size,
+ * max_timeout, and blank_nvm_mode
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the NVM programming mode,
+	 * as the blank mode may be used on the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Switching to words (sr_size contains power of 2) */
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
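+
+/* Worked example (not part of this patch): GLNVM_GENS reports the Shadow
+ * RAM size as a power of two in KB, so sr_size == 3 gives
+ * BIT(3) * ICE_SR_WORDS_IN_1KB = 8 * 512 = 4096 words (8 KB), assuming
+ * ICE_SR_WORDS_IN_1KB == 512 (one KB holds 512 16-bit words).
+ */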
+
+
+/**
+ * ice_read_sr_buf - Reads a Shadow RAM buf and acquires the lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buffer read is preceded by taking NVM ownership and followed
+ * by its release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
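+
+/* Illustrative usage (driver context assumed, not part of this patch):
+ *
+ *	if (ice_nvm_validate_checksum(hw) == ICE_ERR_NVM_CHECKSUM)
+ *		ice_debug(hw, ICE_DBG_NVM, "NVM PFA checksum mismatch\n");
+ */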
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (9 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 10/34] net/ice: Add code to work with the NVM Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12 19:58     ` Mattias Rönnblom
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 12/34] net/ice: Add various headers Wenzhuo Lu
                     ` (23 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add common helper code that multiple other features use.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  186 ++
 2 files changed, 3707 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d49264d
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
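+
+/* Illustrative expansion (not part of this patch): both macros above are
+ * shift-and-mask register writes; conceptually, per field:
+ *
+ *	u32 val = ((opcode << FIELD_OPCODE_S) & FIELD_OPCODE_M) |
+ *		  ((mdid << FIELD_MDID_S) & FIELD_MDID_M);
+ *	wr32(hw, reg, val);
+ *
+ * where the hypothetical FIELD_*_S/_M pairs stand for the per-field shift
+ * and mask constants from ice_hw_autogen.h.
+ */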
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non-PXE mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+
+
+
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return the per-PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user-specified buffer, which should be interpreted as
+ * a "manage_mac_read" response.
+ * The returned MAC addresses are also stored in the HW struct (port.mac).
+ * ice_aq_discover_caps is expected to be called before this function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
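+/* Illustrative usage sketch (not part of the driver): a single port can
+ * report both a LAN and a WoL address, so callers size the response buffer
+ * for two entries, as ice_init_hw() does later in this file:
+ *
+ *	buf = ice_calloc(hw, 2, sizeof(struct ice_aqc_manage_mac_read_resp));
+ *	buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+ *	if (buf)
+ *		status = ice_aq_manage_mac_read(hw, buf, buf_len, NULL);
+ */
+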
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
+	}
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
+		/* If more than one media type is selected, report unknown */
+		return ICE_MEDIA_UNKNOWN;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR:
+		case ICE_PHY_TYPE_LOW_50GBASE_FR:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_DR:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_CP:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	} else {
+		switch (hw_link_info->phy_type_high) {
+		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x0607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
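+/* Illustrative usage sketch (not part of the driver): pi->phy.get_link_info
+ * acts as a staleness flag, so callers typically refresh the cached link
+ * data only when it is set:
+ *
+ *	if (pi->phy.get_link_info) {
+ *		status = ice_aq_get_link_info(pi, false, NULL, NULL);
+ *		if (status)
+ *			return status;
+ *	}
+ */
+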
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
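+/* Note on the sizing above: struct ice_aqc_fw_logging_data evidently already
+ * includes one "entry" element, hence the (n - 1) in the macro; e.g.,
+ * ICE_FW_LOG_DESC_SIZE(1) reduces to
+ * sizeof(struct ice_aqc_fw_logging_data).
+ */
+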
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When (re)configuring FW logging, callers need to update the "cfg" elements
+ * of the hw->fw_log.evnts array with the desired logging event configurations
+ * for the modules of interest. When disabling FW logging completely, callers
+ * can simply pass false in the "enable" parameter. On completion, the function
+ * will update the "cur" element of the hw->fw_log.evnts array with the
+ * resulting logging event configurations of the modules that were
+ * (re)configured. FW logging modules that are not part of a reconfiguration
+ * operation retain their previous states.
+ *
+ * It is recommended that the driver disable FW logging before shutting down
+ * the control queue when resetting the device. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored
+ * in hw->fw_log.evnts[] are not overridden, so that they can be reconfigured
+ * after a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Disable FW logging only when the control queue is still responsive */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but no modules are
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
+
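+/* Illustrative configuration sketch (not part of the driver): per the
+ * description above, a caller that wants FW log messages on the Rx CQ sets
+ * the enable bit and the per-module "cfg" masks before ice_init_hw(); the
+ * module index and event mask below are hypothetical:
+ *
+ *	hw->fw_log.cq_en = true;
+ *	hw->fw_log.evnts[module_id].cfg = event_mask;
+ *
+ * ice_init_hw() then calls ice_cfg_fw_log(hw, true) to apply the settings.
+ */
+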
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw\n");
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+	/* Initialize max burst size */
+	if (!hw->max_burst_size)
+		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing, since ice_init_hw() itself takes care of unrolling
+ * the applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
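+/* Worked example for the polling bound above: GLGEN_RSTCTL_GRSTDEL is a
+ * 6-bit field in 100ms units, so with the field at its maximum (0x3F) the
+ * loop waits up to (63 + 10) * 100ms = 7.3s for the device to become
+ * active; the extra 10 counts are the 1s allowance for outstanding AQ
+ * commands.
+ */
+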
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
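+/* Each ICE_CTX_STORE entry above records a field's width in bits and its
+ * LSB position within the dense hardware context; ice_set_ctx() uses this
+ * table to pack the sparse struct ice_rlan_ctx into the byte buffer written
+ * by ice_write_rxq_ctx() below. For example, qlen is a 13-bit field
+ * occupying bits 89..101 of the Rx queue context.
+ */
+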
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
+
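+/* Illustrative usage sketch (not part of the driver): a caller fills the
+ * sparse context and writes it for a given queue index (the field values
+ * shown are hypothetical):
+ *
+ *	struct ice_rlan_ctx rlan_ctx = { 0 };
+ *
+ *	rlan_ctx.base = rx_ring_base;
+ *	rlan_ctx.qlen = rx_ring_len;
+ *	status = ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
+ */
+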
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps a debug log of the control queue command along with the descriptor
+ * contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+	if (!(mask & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire-lock, perform-action, release-lock
+ * phase of operation, it is possible that the FW may detect a timeout and issue
+ * a CORER. In this case, the driver will receive a CORER interrupt and will
+ * have to determine its cause. The calling thread that is handling this flow
+ * will likely get an error propagated back to it indicating the Download
+ * Package, Update Package or the Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res\n");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The Timeout field of the completion specifies the maximum time in
+	 * ms that the driver may hold the resource.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * release common resource using the admin queue commands (0x0009)
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res\n");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res\n");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
+
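+/* Illustrative usage sketch (not part of the driver): resource ownership is
+ * typically an acquire/use/release sequence (the resource id and timeout
+ * below are hypothetical):
+ *
+ *	status = ice_acquire_res(hw, res_id, ICE_RES_WRITE, timeout_ms);
+ *	if (!status) {
+ *		... perform the protected operation ...
+ *		ice_release_res(hw, res_id);
+ *	}
+ */
+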
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res\n");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* There are some rare cases when trying to release the resource
+	 * results in an admin queue timeout; retry until the release succeeds
+	 * or the send queue command timeout elapses.
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res\n");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_num_per_func - determine number of resources per PF
+ * @hw: pointer to the hw structure
+ * @max: value to be evenly split between each PF
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities and use this to calculate the number of resources
+ * per PF based on the max value passed in.
+ */
+static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return max / funcs;
+}
+
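+/* Worked example: if the valid-functions bitmap from the capabilities is
+ * 0x0F, ice_hweight8() yields 4 active PFs, so a device-wide maximum such as
+ * ICE_MAX_VSI is split as ICE_MAX_VSI / 4 per PF.
+ */
+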
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_num_per_func(hw, ICE_MAX_VSI);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: set to the device's capability count if the AQ returns ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+	/* Prep values for flags, sah, sal */
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
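+/* Layout note (illustrative): on a little-endian host, for a MAC address
+ * of aa:bb:cc:dd:ee:ff the HTONS/HTONL conversions above yield
+ * sah = 0xaabb and sal = 0xccddeeff, i.e. the 6-byte address is split
+ * into a 16-bit high part and a 32-bit low part in network byte order
+ * before being handed to firmware.
+ */
+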
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ * @phy_type_high: higher part of phy_type
+ *
+ * This helper function converts an entry in the PHY type structure
+ * [phy_type_low, phy_type_high] to its corresponding link speed.
+ * Note: exactly one bit should be set in [phy_type_low, phy_type_high],
+ * as this function converts a single PHY type to its speed.
+ * If no bit is set, or if more than one bit is set,
+ * ICE_AQ_LINK_SPEED_UNKNOWN is returned.
+ */
+static u16
+ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
+{
+	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2:
+	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI2:
+	case ICE_PHY_TYPE_LOW_50GBASE_CP:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR:
+	case ICE_PHY_TYPE_LOW_50GBASE_FR:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI1:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
+		break;
+	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4:
+	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_AUI4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_100GBASE_DR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	switch (phy_type_high) {
+	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
+	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return speed_phy_type_low;
+	else
+		return speed_phy_type_high;
+}
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: the format of link_speeds_bitmap is the one used in
+ * [ice_aqc_get_link_status->link_speed]. The caller may pass in a
+ * link_speeds_bitmap that includes multiple speeds.
+ *
+ * Each entry in the [phy_type_low, phy_type_high] structure represents
+ * a certain link speed. This helper function turns on the bits in
+ * [phy_type_low, phy_type_high] that correspond to the speeds set in
+ * the link_speeds_bitmap input parameter.
+ */
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_high;
+	u64 pt_low;
+	int index;
+
+	/* We first check with low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+
+	/* We then check with high part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
+		pt_high = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_high |= BIT_ULL(index);
+	}
+}
+
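+/* Usage sketch (illustrative): a caller advertising both 10G and 25G
+ * would OR the speed bits and let this helper expand them into every
+ * matching PHY type:
+ *
+ *	u64 low = 0, high = 0;
+ *
+ *	ice_update_phy_type(&low, &high,
+ *			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+ *	// 'low' now has all 10G/25G bits set; 'high' remains 0, since
+ *	// phy_type_high only carries 100G variants
+ */
+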
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_high = pcaps->phy_type_high;
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info
+		 * It sometimes takes a really long time for the link to
+		 * come back from the atomic reset. Thus, we wait a
+		 * little bit.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
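+/* Usage sketch (illustrative): a typical caller stores the requested mode
+ * in the port info first, then inspects aq_failures to learn which AQ
+ * step failed:
+ *
+ *	u8 aq_fail;
+ *
+ *	pi->fc.req_mode = ICE_FC_FULL;
+ *	status = ice_set_fc(pi, &aq_fail, true);
+ *	if (status && aq_fail == ICE_SET_FC_AQ_FAIL_SET)
+ *		// the Set PHY config command was the failing step
+ */
+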
+/**
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
+ *
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
+ */
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg)
+{
+	if (!caps || !cfg)
+		return;
+
+	cfg->phy_type_low = caps->phy_type_low;
+	cfg->phy_type_high = caps->phy_type_high;
+	cfg->caps = caps->caps;
+	cfg->low_power_ctrl = caps->low_power_ctrl;
+	cfg->eee_cap = caps->eee_cap;
+	cfg->eeer_value = caps->eeer_value;
+	cfg->link_fec_opt = caps->link_fec_options;
+}
+
+/**
+ * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
+ * @cfg: PHY configuration data to set FEC mode
+ * @fec: FEC mode to configure
+ *
+ * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
+ * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
+ * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
+ */
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
+{
+	switch (fec) {
+	case ICE_FEC_BASER:
+		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+				     ICE_AQC_PHY_FEC_25G_KR_REQ;
+		break;
+	case ICE_FEC_RS:
+		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
+		break;
+	case ICE_FEC_NONE:
+		/* Clear auto FEC and all FEC option bits. */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
+		break;
+	case ICE_FEC_AUTO:
+		/* AND auto FEC bit, and all caps bits. */
+		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
+		break;
+	}
+}
+
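+/* Usage sketch (illustrative): per the note above, the expected sequence
+ * copies the current abilities into the config first so the FEC bits are
+ * adjusted relative to what the PHY actually supports:
+ *
+ *	ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps, NULL);
+ *	ice_copy_phy_caps_to_cfg(pcaps, &cfg);
+ *	ice_cfg_phy_fec(&cfg, ICE_FEC_RS);
+ *	ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+ */
+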
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Variable link_up is true if link is up, false if link is down.
+ * The variable link_up is invalid if status is non-zero. As a
+ * result of this call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
+
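+/* Usage sketch (illustrative): a link-update path can call this helper
+ * and, on success, read the speed from the refreshed link info:
+ *
+ *	bool link_up;
+ *
+ *	if (!ice_get_link_status(pi, &link_up) && link_up)
+ *		speed = pi->phy.link_info.link_speed;
+ */
+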
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
+
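+/* Encoding note (illustrative): for a PF-type LUT of 2K entries, the
+ * flags word assembled above is roughly
+ *
+ *	flags = (ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
+ *		 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) |
+ *		(ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+ *		 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S);
+ *
+ * i.e. the table type and table size share one 16-bit field.
+ */
+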
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of
+ * the Tx queue context: completion queue ID (if the queue uses a
+ * completion queue), quanta profile, cache profile and packet shaper
+ * profile.
+ *
+ * After the add Tx LAN queue AQ command completes, interrupts should be
+ * associated with specific queues. Associating a Tx queue with a doorbell
+ * queue is not part of the add LAN Tx queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* flush pipe on time out */
+	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+	if (status) {
+		if (!qg_list)
+			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
+				  vmvf_num, hw->adminq.sq_last_status);
+		else
+			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
+				  LE16_TO_CPU(qg_list[0].q_id[0]),
+				  hw->adminq.sq_last_status);
+	}
+	return status;
+}
+
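+/* Size-check note (illustrative): for one group carrying a single queue
+ * ID, the loop above expects
+ *
+ *	buf_size = sizeof(struct ice_aqc_dis_txq_item)
+ *		   - sizeof(qg_list[0].q_id)	// header without the array
+ *		   + 1 * sizeof(qg_list[0].q_id);
+ *
+ * with 2 extra bytes of padding whenever num_qs is even, which keeps each
+ * group 4-byte aligned inside the command buffer.
+ */
+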
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count
+	 * is masked to 5 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count
+	 * is masked to 6 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
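+/* Usage sketch (illustrative; the one-entry table below is hypothetical):
+ * a context is described once as an ice_ctx_ele array and then packed
+ * with ice_set_ctx() before being written to hardware:
+ *
+ *	// each entry gives { size_of, width, lsb, offset } of one field
+ *	const struct ice_ctx_ele info[] = {
+ *		ICE_CTX_STORE(ice_tlan_ctx, base, 57, 0),
+ *		{ 0 }
+ *	};
+ *
+ *	ice_set_ctx((u8 *)&tlan_ctx, ctx_buf, info);
+ */
+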
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bit 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 * Bit 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS) {
+		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
+			  LE16_TO_CPU(buf->txqs[0].txq_id),
+			  hw->adminq.sq_last_status);
+		goto ena_txq_exit;
+	}
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
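+/* Usage sketch (illustrative; 'txq_elem' is a hypothetical caller-side
+ * buffer): the PMD's queue-start path fills one group with one queue,
+ * packs the queue context, and calls:
+ *
+ *	txq_elem.num_txqs = 1;
+ *	txq_elem.txqs[0].txq_id = CPU_TO_LE16(q_id);
+ *	status = ice_ena_vsi_txq(pi, vsi_handle, tc, 1, &txq_elem,
+ *				 sizeof(txq_elem), NULL);
+ */
+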
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* If the queues are already disabled but the disable queue command
+	 * still has to be sent to complete the VF reset, call
+	 * ice_aq_dis_lan_txq without any queue information.
+	 */
+
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: lan or rdma
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI lan queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max lan queues array per TC
+ *
+ * This function adds/updates the VSI lan queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from replay filter list head if there is any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+	ice_sched_replay_agg_vsi_preinit(hw);
+
+	return ice_sched_replay_tc_node_bw(hw);
+}
+
+/**
+ * ice_replay_vsi - replay vsi configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver vsi handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	if (!status)
+		status = ice_replay_vsi_agg(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_sched_replay_agg(hw);
+}
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* device stats are not reset at PFR, so they likely will not be zeroed
+	 * when the driver starts. Save the first values read and use them as
+	 * offsets to be subtracted from the raw values in order to report stats
+	 * that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
+
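+/* Rollover note (illustrative arithmetic): with a 40-bit counter, a
+ * previous value of 0xFFFFFFFFF0 followed by a raw read of 0x10 gives
+ *
+ *	cur_stat = (0x10 + BIT_ULL(40)) - 0xFFFFFFFFF0 = 0x20
+ *
+ * i.e. 32 units counted across the wrap instead of a huge bogus delta.
+ */
+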
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* device stats are not reset at PFR, so they likely will not be zeroed
+	 * when the driver starts. Save the first values read and use them as
+	 * offsets to be subtracted from the raw values in order to report stats
+	 * that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to element information
+ *
+ * This function queries HW element information
+ */
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..082ae66
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "ice_switch.h"
+
+/* prototype for functions used for SW locks */
+void ice_free_list(struct LIST_HEAD_TYPE *list);
+void ice_init_lock(struct ice_lock *lock);
+void ice_acquire_lock(struct ice_lock *lock);
+void ice_release_lock(struct ice_lock *lock);
+void ice_destroy_lock(struct ice_lock *lock);
+
+void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
+void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
+
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg);
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
+void ice_sched_replay_agg(struct ice_hw *hw);
+enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
+enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf);
+#endif /* _ICE_COMMON_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 12/34] net/ice: Add various headers
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (10 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 13/34] net/ice: Add protocol structures and defines Wenzhuo Lu
                     ` (22 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add various headers that define status codes and
basic defines for use in the code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_alloc.h     | 22 ++++++++++++++++++
 drivers/net/ice/base/ice_flex_type.h | 19 +++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  8 +++++++
 drivers/net/ice/base/ice_status.h    | 45 ++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_status.h

diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 13/34] net/ice: Add protocol structures and defines
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (11 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 12/34] net/ice: Add various headers Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 14/34] net/ice: Add structures for RX/TX queues Wenzhuo Lu
                     ` (21 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and defines that describe which
protocols the NIC can handle.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 248 +++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..7b92c71
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* One of the allowed 5 words is reserved for the switch ID, so a recipe
+ * can hold at most 4 words, and up to 5 such recipes can be chained
+ * together. The maximum number of words programmable for a lookup is
+ * therefore 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further
+ * need to use flags from the field vector
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u16 payload_len;
+	u8 next_hdr;
+	u8 hop_limit;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_sctp_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u32 verification_tag;
+	u32 check;
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_sctp_hdr sctp_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This is a mapping table entry that maps every word within a given
+ * protocol structure to the real byte offset as per the specification
+ * of that protocol header.
+ * For example, the dst address is 3 words in the ethertype header, at
+ * bytes 0, 2, 3 of the actual packet header, and the src address is at
+ * 4, 6, 8.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
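
The chaining limits in this header compose as follows: each recipe keeps
ICE_NUM_WORDS_RECIPE (4) usable lookup words after reserving one of its
5 fields for the switch ID, and ICE_MAX_CHAIN_RECIPE (5) recipes can be
chained, giving ICE_MAX_CHAIN_WORDS = 4 * 5 = 20 programmable words. A
minimal sketch of the bounds check a consumer of struct ice_prot_lkup_ext
might perform; ice_lkup_fits_chain() is illustrative, not part of the
patch:

	#include "ice_protocol_type.h"

	/* Reject a lookup needing more extraction words than a fully
	 * chained recipe set can provide (4 words/recipe * 5 recipes).
	 */
	static int ice_lkup_fits_chain(const struct ice_prot_lkup_ext *lkup)
	{
		return lkup->n_val_words <= ICE_MAX_CHAIN_WORDS;
	}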

* [dpdk-dev] [PATCH v3 14/34] net/ice: Add structures for RX/TX queues
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (12 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 13/34] net/ice: Add protocol structures and defines Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 15/34] net/ice: add OS specific implementation Wenzhuo Lu
                     ` (20 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures that define how the RX/TX queues
are used.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 ++++++++++++++++++++++++++++++++++
 1 file changed, 2291 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
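
The legacy descriptors in this patch pack status, error, ptype and
buffer lengths into a single little-endian qword; the ICE_RXD_QW1_*
shift/mask pairs are meant to be applied after converting that qword to
host byte order. A minimal sketch of the decode, assuming the defines
from the diff below plus DPDK's rte_le_to_cpu_64();
ice_example_parse_qw1() is illustrative, not part of the patch:

	#include <rte_byteorder.h>
	#include "ice_lan_tx_rx.h"

	/* Decode writeback qword1 of a legacy 32-byte RX descriptor:
	 * check the DD (descriptor done) bit, then extract the ptype
	 * and the packet-buffer length.
	 */
	static int ice_example_parse_qw1(const union ice_32byte_rx_desc *rxd,
					 u16 *ptype, u16 *pkt_len)
	{
		u64 qw1 = rte_le_to_cpu_64(rxd->wb.qword1.status_error_len);

		if (!(qw1 & BIT_ULL(ICE_RX_DESC_STATUS_DD_S)))
			return 0; /* not yet written back by HW */

		*ptype = (qw1 & ICE_RXD_QW1_PTYPE_M) >> ICE_RXD_QW1_PTYPE_S;
		*pkt_len = (qw1 & ICE_RXD_QW1_LEN_PBUF_M) >>
			   ICE_RXD_QW1_LEN_PBUF_S;
		return 1;
	}

BIT_ULL and the u16/u64 typedefs come from ice_osdep.h (pulled in by
ice_lan_tx_rx.h), so the sketch assumes that patch is applied as well.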

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..d27045f
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-ip values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
+
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 2
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Flow Id higher 16-bits
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 4
+ * Flex-field 0: Destination Vsi
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: There are a total
+ * of 64 profiles. Profile IDs 0/1 are for legacy, and
+ * profiles 2-63 are flex profiles that can be programmed
+ * with specific metadata (profile 7 is reserved for HW).
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex descriptor Dword Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32byte_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32byte_rx_flex_desc.ptype_flexi_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32byte_rx_flex_desc.pkt_length member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
+
+/* for ice_32byte_rx_flex_desc.header_length_sph_flexi_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
+
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0X7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
+
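+/*
+ * Illustrative sketch only (not part of the driver code): an RX path
+ * would translate the hardware ptype through the table above. Assuming,
+ * as in the equivalent i40e table, that unused entries leave the .known
+ * field of struct ice_rx_ptype_decoded clear:
+ *
+ *	struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype);
+ *
+ *	if (!decoded.known)
+ *		return;	// unused ptype, nothing to decode
+ */
+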
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+#define ICE_LINK_SPEED_50000MBPS	50000
+#define ICE_LINK_SPEED_100000MBPS	100000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 15/34] net/ice: add OS specific implementation
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (13 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 14/34] net/ice: Add structures for RX/TX queues Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization Wenzhuo Lu
                     ` (19 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add some macro definitions and small inline functions which are
specific to DPDK.
Add a README too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/base/README      |  22 ++
 drivers/net/ice/base/ice_osdep.h | 524 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 546 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_osdep.h

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..708f607
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+=================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.12.11, released by the team which develops the base
+drivers for the ice NIC family. The base/ directory contains the
+original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..dd25b75
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+		u16 len_l = len;					\
+		u8 *buf_l = buf;					\
+		int i;							\
+		for (i = 0; i < len_l; i += 8)				\
+			ice_debug(hw_l, type,				\
+				  "0x%04X  0x%016"PRIx64"\n",		\
+				  i, *((u64 *)((buf_l) + i)));		\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
+
+#define BITS_PER_BYTE       8
+typedef u32 ice_bitmap_t;
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_TO_CHUNKS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define ice_declare_bitmap(name, bits) \
+	ice_bitmap_t name[BITS_TO_CHUNKS(bits)]
+
+#define BITS_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >>			\
+		((BITS_PER_BYTE * sizeof(ice_bitmap_t)) -		\
+		(((nr) - 1) % (BITS_PER_BYTE * sizeof(ice_bitmap_t))	\
+		 + 1)))
+#define BITS_PER_CHUNK          (BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define BIT_CHUNK(nr)           ((nr) / BITS_PER_CHUNK)
+#define BIT_IN_CHUNK(nr)        BIT((nr) % BITS_PER_CHUNK)
+
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = ~0u >> (8 - (sz % 8));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
+
+static inline int ice_find_first_bit(ice_bitmap_t *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < BITS_PER_BYTE * (size / BITS_PER_BYTE); i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+static inline int ice_find_next_bit(ice_bitmap_t *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	for (i = bits; i < BITS_PER_BYTE * (size / BITS_PER_BYTE); i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return bits;
+}
+
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
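+/*
+ * A minimal usage sketch of the bitmap helpers (illustrative only;
+ * ice_zero_bitmap() and ice_set_bit() are defined further below, and
+ * ice_set_bit() addresses bits within one 32-bit chunk):
+ *
+ *	u16 bit;
+ *	ice_declare_bitmap(bmp, 32);
+ *
+ *	ice_zero_bitmap(bmp, 32);
+ *	ice_set_bit(3, bmp);
+ *	ice_set_bit(21, bmp);
+ *	for_each_set_bit(bit, bmp, 32)	// visits 3, then 21
+ *		DEBUGOUT("bit %u is set\n", bit);
+ */
+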
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u32 bits)
+{
+	u32 max_index = BITS_TO_CHUNKS(bits);
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc(NULL, s, 0)
+#define ice_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline void
+ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = src[i];
+
+	/* We only want to copy bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= src[i] & mask;
+}
+
+static inline bool
+ice_cmp_bitmap(ice_bitmap_t *bmp1, ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		if (bmp1[i] != bmp2[i])
+			return false;
+
+	/* We only want to compare bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	if ((bmp1[i] & mask) != (bmp2[i] & mask))
+		return false;
+
+	return true;
+}
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
+
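+/*
+ * These wrappers give the shared base code a Linux-like locking API on
+ * top of rte_spinlock. Expected call pattern (sketch):
+ *
+ *	struct ice_lock lock;
+ *
+ *	ice_init_lock(&lock);
+ *	ice_acquire_lock(&lock);
+ *	// ... critical section ...
+ *	ice_release_lock(&lock);
+ *	ice_destroy_lock(&lock);
+ */
+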
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
+
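+/*
+ * Typical lifecycle of the DMA helpers above (sketch; the base code is
+ * the real caller, and the error code shown is only an assumption):
+ *
+ *	struct ice_dma_mem mem;
+ *
+ *	if (!ice_alloc_dma_mem(hw, &mem, 4096))
+ *		return ICE_ERR_NO_MEMORY;
+ *	// use mem.va (virtual) and mem.pa (physical) ...
+ *	ice_free_dma_mem(hw, &mem);
+ */
+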
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head) is the same as in sys/queue.h */
+
+/* Note: the parameters are swapped */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
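+/*
+ * The macros above emulate the Linux kernel list API on top of the BSD
+ * sys/queue.h LIST macros. Illustrative walk, assuming a hypothetical
+ * struct foo with an embedded struct ice_list_entry member named "node":
+ *
+ *	struct ice_list_head head;
+ *	struct foo *pos, *elem;
+ *
+ *	INIT_LIST_HEAD(&head);
+ *	LIST_ADD(&elem->node, &head);
+ *	LIST_FOR_EACH_ENTRY(pos, &head, foo, node)
+ *		use(pos);
+ */
+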
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (14 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 15/34] net/ice: add OS specific implementation Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12 18:17     ` Ferruh Yigit
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops Wenzhuo Lu
                     ` (18 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   9 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  75 ++++
 drivers/net/ice/ice_ethdev.c            | 640 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 318 ++++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/ice_rxtx.h              | 117 ++++++
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 9 files changed, 1210 insertions(+)
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/config/common_base b/config/common_base
index d12ae98..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,15 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
+
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..5af66d9
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER  = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..e0bf15c
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,640 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static void ice_dev_close(struct rte_eth_dev *dev);
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 __rte_unused void *opaque)
+{
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	/* reset errno so a stale value cannot fail the check below */
+	errno = 0;
+	num = strtoul(qp_value, &end, 10);
+
+	if (!num || (*end == '-') || errno) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int ret;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	/* one pass is enough: validate and fetch the result in one call */
+	ret = rte_kvargs_process(kvlist, queue_num_key,
+				 ice_check_qp_num, NULL);
+	rte_kvargs_free(kvlist);
+	if (ret < 0)
+		return 0;
+
+	return ret;
+}
+
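+/*
+ * A hypothetical devargs example: the max_queue_pair_num key is parsed
+ * above, so the queue pair limit can be passed per device on the EAL
+ * command line (the PCI address is only an example):
+ *
+ *	-w 0000:01:00.0,max_queue_pair_num=8
+ */
+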
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* queue heap initialize */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize element  */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up in the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* Find best one */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry found to satisfy the request, return */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly the number of queues requested,
+	 * so remove it from the free list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested, so create
+		 * a new entry for the alloc list and reduce the base
+		 * and length of the entry left on the free list.
+		 */
+		entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
+
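+/*
+ * Worked example: after ice_res_pool_init(&pool, 1, 16) the pool covers
+ * IDs 1..16 with one free entry {base 0, len 16}. A following
+ * ice_res_pool_alloc(&pool, 4) carves a best-fit entry {base 0, len 4},
+ * leaves {base 4, len 12} on the free list and returns
+ * base + pool->base = 1.
+ */
+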
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Default is TC0 for now; multi-TC support needs to be added later.
+	 * Configure the TC and queue mapping parameters, and for each
+	 * enabled TC allocate qpnum_per_tc queues to that traffic class.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust the queue number to actual queues that can be applied */
+	vsi->nb_qps = 0x1 << bsf;
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+#define ICE_TC_QUEUE_TABLE_DFLT 0x00FAC688
+	info->ingress_table  = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->egress_table   = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->outer_up_table = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	return 0;
+}
+
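+/*
+ * Note on the rounding in ice_vsi_config_tc_queue_mapping() above:
+ * rte_bsf32() returns the index of the least significant set bit, so
+ * the applied queue count is always a power of two (the admin queue
+ * encodes the count as an exponent). For example, vsi->nb_qps = 6
+ * (0b110) gives bsf = 1, so the VSI ends up with 1 << 1 = 2 queue pairs.
+ */
+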
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr
+		((struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/*  Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+	if (ice_config_max_queue_pair_num(dev->device->devargs) > 0)
+		pf->lan_nb_qp_max =
+			ice_config_max_queue_pair_num(dev->device->devargs);
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of the VSI add/update
+	 * command. Assume vsi->base_queue is 0 for now; SRIOV and VMDQ
+	 * cases are not considered in this first stage. Only the main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* store the VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* At the beginning, only TC0. */
+	/* What we need here is the maximum number of TX queues.
+	 * Currently vsi->nb_qps carries that value.
+	 * Correct this if that ever changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	rte_eth_copy_pci_info(dev, pci_dev);
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_release_vsi(pf->main_vsi);
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log);
+static void
+ice_init_log(void)
+{
+	ice_logtype_init = rte_log_register("pmd.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..3cefa5b
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,318 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* Default settings for the number of VSIs and queues */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12-bit number.
+ * The VFTA array is actually a 4096-bit array, i.e. 128 32-bit elements.
+ * 2^5 = 32. The value of the lower 5 bits selects the bit within a 32-bit
+ * element, and the value of the higher 7 bits selects the VFTA array index.
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
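+/*
+ * Worked example (illustrative, not part of the original patch):
+ * for vlan_id = 1029, ICE_VFTA_IDX(1029) = 1029 >> 5 = 32 and
+ * ICE_VFTA_BIT(1029) = 1 << (1029 & 0x1F) = 1 << 5, so VLAN 1029
+ * maps to bit 5 of the 32-bit element vfta[32].
+ */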
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When the driver is loaded, only a default main VSI exists. When a
+	 * new VSI needs to be added, the HW needs to know the layout in
+	 * which the VSIs are organized. Besides that, a VSI is an element
+	 * that can't switch packets by itself; a new VEB component must be
+	 * added to perform the switching. So a new VSI needs to specify its
+	 * uplink VSI (parent VSI) before it is created. The uplink VSI then
+	 * checks whether it already has a VEB to switch packets. If not, it
+	 * tries to create one. Then the uplink VSI moves the new VSI into
+	 * its sib_vsi_list to manage all the downlink VSIs.
+	 *  sib_vsi_list: the VSI list that shares the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It's NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+	uint16_t nb_msix;   /* The max number of MSIX vectors */
+	uint8_t enabled_tc; /* The traffic class enabled */
+	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+	uint8_t vlan_filter_on; /* The VLAN filter enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF is associated with */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Index of the next free software VSI.
+	 * To keep it simple, indexes are not recycled;
+	 * they are assumed to be more than enough.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* internal packet statistics, it should be excluded from the total */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* The PVID to apply; valid when 'on' is set */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)(pf)->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
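+/*
+ * Rounds n down to the nearest power of two, e.g. ice_align_floor(48) == 32
+ * and ice_align_floor(64) == 64 (illustrative values, not from the patch).
+ */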
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of the next staged packet */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
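+/*
+ * Illustrative sketch (not part of the original patch): a Tx path would
+ * typically fill this union from the mbuf metadata before building the
+ * descriptor, e.g.:
+ *	union ice_tx_offload tx_offload = { .data = 0 };
+ *	tx_offload.l2_len = mbuf->l2_len;
+ *	tx_offload.l3_len = mbuf->l3_len;
+ *	tx_offload.l4_len = mbuf->l4_len;
+ *	tx_offload.tso_segsz = mbuf->tso_segsz;
+ */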
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (15 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12 20:07     ` Mattias Rönnblom
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information Wenzhuo Lu
                     ` (17 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Normally, when the device is started or stopped, its queues
should be started and stopped as well. Support both in
this patch.

The ops below are added:
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
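
[Editorial note, not part of the original submission: a minimal sketch of
how an application exercises these ops through the generic ethdev API,
assuming port_id 0, one Rx/Tx queue pair of 512 descriptors, and an
already-created mempool `mb_pool`; error checking omitted for brevity:

	struct rte_eth_conf port_conf = { 0 };
	uint16_t port_id = 0;

	/* invokes the PMD's dev_configure */
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	/* invokes rx_queue_setup/tx_queue_setup */
	rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
			       NULL, mb_pool);
	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
	/* dev_start programs and enables the queues */
	rte_eth_dev_start(port_id);
	/* ... */
	/* dev_stop/dev_close stop and release the queues */
	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);
]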
 drivers/net/ice/Makefile       |   3 +-
 drivers/net/ice/ice_ethdev.c   | 196 ++++++++-
 drivers/net/ice/ice_lan_rxtx.c | 927 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  20 +
 4 files changed, 1144 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 5af66d9..472f9c7 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e0bf15c..603896a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,7 +14,11 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -24,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -627,14 +643,192 @@
 		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
 }
 
+static int
+ice_dev_configure(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Initialize to TRUE. If any Rx queue fails to meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc(NULL,
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc(NULL,
+					   vsi->rss_lut_size, 0);
+	if (!vsi->rss_key || !vsi->rss_lut)
+		return -ENOMEM;
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	}
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->idx, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
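+	/* e.g. with nb_q = 4, the LUT becomes 0,1,2,3,0,1,2,3,...,
+	 * spreading hash buckets round-robin across the Rx queues
+	 * (illustrative example, not from the original patch).
+	 */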
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->idx,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	/* program Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* program Rx queues' context in hardware*/
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
 static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
 	ice_shutdown_all_ctrlq(hw);
 }
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..5c2301a
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptor. It sets the register
+	 * to flex descriptor mode.
+	 * DPDK uses legacy descriptor. It should set the register back
+	 * to the default value, then uses legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size as the head split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold defined in 64 descriptors units.
+	 * When the number of free descriptors goes below the lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
+	rx_ctx.lrxqthresh = 2;
+	/* Default to 32-byte descriptors; VLAN tag extracted to L2TAG2 (1st) */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. QENA_STAT is expected to follow
+	 * QENA_REQ within no more than 10 us.
+	 * TODO: need to change the wait counter later
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check if it is timeout */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions
+	(__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or not set up",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add lan txq");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	/* The byte size of the ring can exceed UINT16_MAX
+	 * (e.g. 4096 descriptors * 16 bytes), so use 32 bits.
+	 */
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(DEBUG, "Failed to disable Lan Tx queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocate a little more memory because the vectorized/bulk_alloc Rx
+	 * functions don't check boundaries each time.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket(NULL,
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
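+	/* Worked example (illustrative, not from the original patch):
+	 * with nb_desc = 512, the defaults tx_rs_thresh = 32 and
+	 * tx_free_thresh = 32 satisfy all constraints:
+	 * 32 > 0, 32 < 510, 32 <= 32, and 512 % 32 == 0.
+	 */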
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket(NULL,
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
 		uint64_t outer_l3_len:16; /* outer L3 Header Length */
 	};
 };
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (16 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-13  9:10     ` Zhang, Qi Z
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 19/34] net/ice: support packet type getting Wenzhuo Lu
                     ` (16 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
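
[Editorial note, not part of the original submission: a minimal sketch of
querying the new op from an application, assuming a valid port_id:

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	printf("max queues: rx=%u tx=%u, max pktlen=%u\n",
	       (unsigned int)dev_info.max_rx_queues,
	       (unsigned int)dev_info.max_tx_queues,
	       dev_info.max_rx_pktlen);
]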
 drivers/net/ice/ice_ethdev.c | 123 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 123 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 603896a..71108e7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -832,3 +835,123 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	switch (hw->port_info->phy.link_info.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		dev_info->speed_capa = ETH_LINK_SPEED_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		dev_info->speed_capa = ETH_LINK_SPEED_AUTONEG;
+		break;
+	}
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = ICE_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = ICE_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
+	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
+}
-- 
1.9.3

* [dpdk-dev] [PATCH v3 19/34] net/ice: support packet type getting
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (17 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 20/34] net/ice: support link update Wenzhuo Lu
                     ` (15 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.
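
For context, not part of the patch: a hedged sketch (18.11-era API,
hypothetical helper) of consulting this op before the application
relies on mbuf->packet_type in its Rx path.

#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Return nonzero if the port reports at least one L4 ptype. */
static int
port_classifies_l4(uint16_t port_id)
{
	/* With ptypes == NULL and num == 0, only the count of
	 * ptypes matching the mask is returned.
	 */
	return rte_eth_dev_get_supported_ptypes(port_id,
						RTE_PTYPE_L4_MASK,
						NULL, 0) > 0;
}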

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 71108e7..4d46725 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -493,6 +494,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	rte_eth_copy_pci_info(dev, pci_dev);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 5c2301a..8230bb2 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,6 +884,42 @@
 	rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -925,3 +961,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet explains what each value means in more detail.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

* [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (18 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 19/34] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-13  8:47     ` Zhang, Qi Z
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting Wenzhuo Lu
                     ` (14 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.
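
For context, not part of the patch: a minimal sketch (18.11-era API,
illustrative helper) of polling the link; the call lands in this
link_update op, which fills status/speed/duplex from the AQ link data.

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Non-blocking variant: wait_to_complete == 0. */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: link %s, %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
}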

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 332 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 332 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4d46725..518ce70 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -331,6 +334,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc(NULL, event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by the NIC for handling a
+ * specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -488,6 +672,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -496,6 +681,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	rte_eth_copy_pci_info(dev, pci_dev);
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -542,6 +728,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -588,6 +783,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	ice_dev_close(dev);
 
@@ -598,6 +795,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -758,6 +962,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info AQ command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -778,6 +985,8 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -795,6 +1004,13 @@ static int ice_init_rss(struct ice_pf *pf)
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -957,3 +1173,119 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
 	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
 }
+
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
-- 
1.9.3

* [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (19 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 20/34] net/ice: support link update Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-13 21:05     ` Ferruh Yigit
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops Wenzhuo Lu
                     ` (13 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops mtu_set.
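
For context, not part of the patch: a minimal sketch (18.11-era API,
hypothetical helper) of driving this op. rte_eth_dev_set_mtu() calls
into mtu_set, which rejects a started port with -EBUSY and an
out-of-range frame size with -EINVAL, per the code below.

#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
	int ret;

	/* The op requires a stopped port. */
	rte_eth_dev_stop(port_id);
	ret = rte_eth_dev_set_mtu(port_id, mtu);
	if (ret != 0)
		return ret;
	return rte_eth_dev_start(port_id);
}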

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 518ce70..38e822f 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 };
 
 static void
@@ -1289,3 +1291,35 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	/* check if mtu is within the allowed range */
+	if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
-- 
1.9.3

* [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (20 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-13  9:00     ` Zhang, Qi Z
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 23/34] net/ice: support VLAN ops Wenzhuo Lu
                     ` (12 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops (a usage sketch follows the list):
mac_addr_set
mac_addr_add
mac_addr_remove
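
For context, not part of the patch: a minimal sketch (18.11-era API;
the helper and both addresses are illustrative) of exercising the new
ops through the generic ethdev calls.

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
set_port_macs(uint16_t port_id, struct ether_addr *dflt,
	      struct ether_addr *extra)
{
	int ret;

	/* Reaches ice_macaddr_set(): swaps the default MAC filter. */
	ret = rte_eth_dev_default_mac_addr_set(port_id, dflt);
	if (ret != 0)
		return ret;
	/* Reaches ice_macaddr_add(); the pool argument is unused here. */
	return rte_eth_dev_mac_addr_add(port_id, extra, 0);
}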

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 233 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 233 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 38e822f..7ae4e2d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 };
 
 static void
@@ -336,6 +346,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -544,6 +678,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -629,6 +766,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximum number of TX queues.
 	 * Currently vsi->nb_qps holds that value.
@@ -1323,3 +1475,84 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	return 0;
 }
+
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
-- 
1.9.3

* [dpdk-dev] [PATCH v3 23/34] net/ice: support VLAN ops
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (21 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 24/34] net/ice: support RSS Wenzhuo Lu
                     ` (11 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops (a usage sketch follows the list):
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set
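
For context, not part of the patch: a minimal sketch (18.11-era API,
hypothetical helper) that enables VLAN stripping and filtering and
then whitelists one VLAN ID, exercising ice_vlan_offload_set() and
ice_vlan_filter_set().

#include <rte_ethdev.h>

static int
allow_vlan(uint16_t port_id, uint16_t vlan_id)
{
	int ret;

	ret = rte_eth_dev_set_vlan_offload(port_id,
					   ETH_VLAN_STRIP_OFFLOAD |
					   ETH_VLAN_FILTER_OFFLOAD);
	if (ret != 0)
		return ret;

	/* on == 1 adds the filter to the VSI's whitelist. */
	return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
}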

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 590 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 590 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 7ae4e2d..728e6af 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
 static void
@@ -470,6 +483,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find a specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * Vlan 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq insertion",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -829,6 +1133,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -882,6 +1187,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -917,6 +1227,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1556,3 +1868,281 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 		return;
 	}
 }
+
+static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Enable or disable VLAN filtering */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+	break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If insert pvid is enabled, only tagged pkts are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 24/34] net/ice: support RSS
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (22 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 23/34] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 25/34] net/ice: support RX queue interruption Wenzhuo Lu
                     ` (10 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops (a short usage sketch follows the list):
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
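
For reference, a minimal sketch that drives these ops through the
ethdev API (illustrative only, not part of this patch); it points
every RETA entry at queue 0 and reads back the 52-byte hash key.
rss_hash_update takes the same rte_eth_rss_conf structure:

#include <string.h>
#include <rte_ethdev.h>

static int
rss_demo(uint16_t port_id)
{
	struct rte_eth_rss_reta_entry64
		reta[ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE];
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rss_conf rss_conf;
	uint8_t key[52]; /* (VSIQF_HKEY_MAX_INDEX + 1) * 4 bytes */
	uint16_t i, reta_size;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);
	reta_size = dev_info.reta_size;

	/* reta_update: redirect all hash buckets to queue 0 */
	memset(reta, 0, sizeof(reta));
	for (i = 0; i < reta_size; i++) {
		reta[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = 0;
	}
	ret = rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
	if (ret)
		return ret;

	/* rss_hash_conf_get: read the key back */
	memset(&rss_conf, 0, sizeof(rss_conf));
	rss_conf.rss_key = key;
	rss_conf.rss_key_len = sizeof(key);
	return rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
}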

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 242 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 242 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 728e6af..d82ce23 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2066,6 +2080,234 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of the configured hash lookup table (%d) "
+			    "doesn't match the number the hardware "
+			    "supports (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for the RSS LUT");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of the configured hash lookup table (%d) "
+			    "doesn't match the number the hardware "
+			    "supports (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for the RSS LUT");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	struct ice_aqc_get_set_rss_keys *key_dw =
+		(struct ice_aqc_get_set_rss_keys *)key;
+
+	ret = ice_aq_set_rss_key(hw, vsi->idx, key_dw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key
+		(hw, vsi->idx,
+		 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	/* set hash key */
+	status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (status)
+		return status;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: hash function config is not supported yet, so report 0 */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 25/34] net/ice: support RX queue interruption
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (23 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 24/34] net/ice: support RSS Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 26/34] net/ice: support FW version getting Wenzhuo Lu
                     ` (9 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops (a short usage sketch follows the list):
rx_queue_intr_enable
rx_queue_intr_disable
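
For reference, a minimal sketch of the interrupt-driven receive loop
these ops enable (illustrative only; assumes the port was configured
with intr_conf.rxq = 1 and started, as in examples/l3fwd-power):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_interrupts.h>

static void
rx_intr_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, i;

	/* add this queue's event fd to the per-thread epoll set */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	for (;;) {
		/* arm the interrupt and sleep until traffic arrives */
		rte_eth_dev_rx_intr_enable(port_id, queue_id);
		rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
		rte_eth_dev_rx_intr_disable(port_id, queue_id);

		/* drain the queue in polling mode */
		do {
			nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
			for (i = 0; i < nb_rx; i++)
				rte_pktmbuf_free(pkts[i]); /* or process */
		} while (nb_rx > 0);
	}
}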

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 230 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 230 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d82ce23..d78169a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -1397,6 +1403,186 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do the actual binding; ITR index 0 is used */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and also clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+		rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
+			    0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1431,6 +1617,10 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	/* enable Rx interrupts and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -1465,6 +1655,7 @@ static int ice_init_rss(struct ice_pf *pf)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1481,6 +1672,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -2307,6 +2501,42 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 26/34] net/ice: support FW version getting
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (24 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 25/34] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 27/34] net/ice: support EEPROM information getting Wenzhuo Lu
                     ` (8 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the fw_version_get op; a short usage sketch follows.
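
For reference, a minimal usage sketch (illustrative only). Per the
ethdev contract, the call returns 0 on success or the required buffer
size when the supplied buffer is too small, matching the return
convention implemented in this patch:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_fw_version(uint16_t port_id)
{
	char fw_version[64];
	int ret;

	ret = rte_eth_dev_fw_version_get(port_id, fw_version,
					 sizeof(fw_version));
	if (ret == 0)
		printf("firmware: %s\n", fw_version);
	else if (ret > 0)
		printf("buffer too small, need %d bytes\n", ret);
}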

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d78169a..aae2c5e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2538,6 +2541,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 27/34] net/ice: support EEPROM information getting
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (25 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 26/34] net/ice: support FW version getting Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 28/34] net/ice: support statistics Wenzhuo Lu
                     ` (7 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add the following ops (a short usage sketch follows the list):
get_eeprom_length
get_eeprom
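
For reference, a minimal sketch of dumping the NVM through the ethdev
API (illustrative only; note that offset/length in rte_dev_eeprom_info
are byte based, while the driver works in 16-bit words):

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_eeprom(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	int len = rte_eth_dev_get_eeprom_length(port_id);
	int ret;

	if (len <= 0)
		return len;

	memset(&info, 0, sizeof(info));
	info.data = malloc(len);
	if (!info.data)
		return -ENOMEM;
	info.offset = 0;
	info.length = len;

	ret = rte_eth_dev_get_eeprom(port_id, &info);
	/* ... parse info.data on success ... */
	free(info.data);
	return ret;
}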

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 45 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index aae2c5e..98b17bc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 };
 
 static void
@@ -2639,3 +2644,43 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 28/34] net/ice: support statistics
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (26 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 27/34] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 29/34] net/ice: support queue information getting Wenzhuo Lu
                     ` (6 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add the following ops (a short usage sketch follows the list):
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset
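
For reference, a minimal sketch of reading basic and extended stats
through the ethdev API (illustrative only):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
show_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_xstat *xstats;
	struct rte_eth_xstat_name *names;
	int n, i;

	/* stats_get */
	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets);

	/* xstats: query the count first, then fetch names and values */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;
	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (xstats && names &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, xstats, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, xstats[i].value);
	}
	free(xstats);
	free(names);

	/* stats_reset (also registered as xstats_reset in this driver) */
	rte_eth_stats_reset(port_id);
}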

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 566 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 566 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 98b17bc..6d9d321 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,92 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2684,3 +2776,477 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	return 0;
 }
+
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* read the stats registers and update the internal stats struct */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats, reading current register values into offset */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from the ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_get_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from the ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
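
The offset-based "reset" in ice_stats_reset() above exists because the
E810 port counters are free-running and cannot be cleared by the driver:
a reset only re-captures the current register values as new offsets, and
later reads report the delta. Below is a minimal standalone sketch of
that pattern with hypothetical names (not the driver's code); the wrap
handling mirrors what ice_stat_update_40() does for 40-bit counters:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct counter {
	uint64_t offset; /* register snapshot taken at the last "reset" */
	uint64_t value;  /* count reported since the last "reset" */
};

/* Free-running HW counter: the reported value is current minus offset,
 * corrected for 40-bit wrap-around. */
static void
stat_update_40(uint64_t reg, bool offset_loaded, struct counter *c)
{
	if (!offset_loaded)
		c->offset = reg;
	if (reg >= c->offset)
		c->value = reg - c->offset;
	else
		c->value = (reg + ((uint64_t)1 << 40)) - c->offset;
}

int
main(void)
{
	struct counter rx = {0, 0};

	stat_update_40(100, false, &rx); /* "reset": offset becomes 100 */
	stat_update_40(150, true, &rx);  /* reports 150 - 100 = 50 */
	printf("packets since reset: %" PRIu64 "\n", rx.value);
	return 0;
}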

* [dpdk-dev] [PATCH v3 29/34] net/ice: support queue information getting
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (27 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 28/34] net/ice: support statistics Wenzhuo Lu
@ 2018-12-12  6:59   ` Wenzhuo Lu
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 30/34] net/ice: support basic RX/TX Wenzhuo Lu
                     ` (5 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  6:59 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  5 ++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6d9d321..56641ac 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.rx_queue_count               = ice_rx_queue_count,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 8230bb2..fed12b4 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -921,6 +921,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of every 4th Rx descriptor to avoid
+		 * checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
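
The stride-4 walk in ice_rx_queue_count() above trades accuracy for
fewer descriptor reads: only every ICE_RXQ_SCAN_INTERVAL-th DD bit is
checked, so the returned count is a multiple of 4 and may overstate the
true number of used descriptors by up to 3. A standalone sketch of that
walk over a toy ring (hypothetical setup, not driver code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NB_DESC       16
#define SCAN_INTERVAL 4 /* same role as ICE_RXQ_SCAN_INTERVAL */

static uint16_t
rx_queue_count(const bool *dd, uint16_t tail)
{
	uint16_t desc = 0;

	/* probe the DD bit every SCAN_INTERVAL descriptors, wrapping
	 * around the ring end like the real implementation */
	while (desc < NB_DESC && dd[(tail + desc) % NB_DESC])
		desc += SCAN_INTERVAL;

	return desc;
}

int
main(void)
{
	bool dd[NB_DESC] = {false};
	uint16_t tail = 14;

	/* 6 descriptors are actually done ... */
	for (int i = 0; i < 6; i++)
		dd[(tail + i) % NB_DESC] = true;

	/* ... but the stride-4 scan reports 8 */
	printf("used descriptors (approx): %u\n", rx_queue_count(dd, tail));
	return 0;
}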

* [dpdk-dev] [PATCH v3 30/34] net/ice: support basic RX/TX
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (28 preceding siblings ...)
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 29/34] net/ice: support queue information getting Wenzhuo Lu
@ 2018-12-12  7:00   ` Wenzhuo Lu
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 31/34] net/ice: support advance RX/TX Wenzhuo Lu
                     ` (4 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  7:00 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |   5 +
 drivers/net/ice/ice_lan_rxtx.c | 568 ++++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h     |   8 +
 3 files changed, 579 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 56641ac..d63407c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1260,6 +1260,9 @@ struct ice_xstats_name_off {
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
@@ -1720,6 +1723,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
+	/* enable Rx interrupt and map Rx queues to interrupt vectors */
 	if (ice_rxq_intr_setup(dev))
 		return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index fed12b4..1b1bf47 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,8 +884,81 @@
 	rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -917,7 +990,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1028,6 +1103,495 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the queue's receive tail register.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return the number of received packets in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%lx\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static const uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * in case of a non-tunneled packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus one context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ_PKT) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside the supported range is
+			 * considered malicious
+			 */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+		dev->tx_pkt_burst = ice_xmit_pkts;
+		dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
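
For TSO, ice_set_tso_ctx() above programs the context descriptor with
the L4 payload length only, i.e. the packet length minus every header
the hardware replicates in front of each segment. A worked example of
that arithmetic with hypothetical values (the segment count is shown
for illustration only; the hardware derives it from the MSS):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t l2_len = 14;     /* Ethernet header */
	uint32_t l3_len = 20;     /* IPv4 header */
	uint32_t l4_len = 20;     /* TCP header */
	uint32_t pkt_len = 66000; /* one large TSO frame */
	uint32_t mss = 1460;      /* tso_segsz */

	/* hdr_len as computed by ice_set_tso_ctx() for a
	 * non-tunneled packet (outer lengths are 0) */
	uint32_t hdr_len = l2_len + l3_len + l4_len;
	uint32_t tso_len = pkt_len - hdr_len; /* payload to segment */
	uint32_t nb_segs = (tso_len + mss - 1) / mss;

	printf("hdr=%u payload=%u -> %u segments of <=%u bytes each\n",
	       hdr_len, tso_len, nb_segs, mss);
	return 0;
}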

* [dpdk-dev] [PATCH v3 31/34] net/ice: support advance RX/TX
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (29 preceding siblings ...)
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 30/34] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-12  7:00   ` Wenzhuo Lu
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops Wenzhuo Lu
                     ` (3 subsequent siblings)
  34 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  7:00 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the scattered and bulk-allocation RX functions.
Add the simple TX function.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_lan_rxtx.c | 660 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 658 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 1b1bf47..986cbc6 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -957,6 +957,431 @@
 	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update the Rx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return the number of received packets in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -990,7 +1415,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1313,6 +1742,20 @@
 	return 0;
 }
 
+/* Construct the tx flags */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1531,10 +1974,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine whether the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using a Scattered function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1588,8 +2234,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
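
One detail worth calling out in the simple Tx path above:
ice_tx_fill_hw_ring() splits a burst into a multiple-of-4 "mainpart"
written by tx4() and a remainder written by tx1(), which amortizes loop
overhead across four descriptors at a time. A toy illustration of the
same index split (standalone sketch, hypothetical output):

#include <stdint.h>
#include <stdio.h>

#define N_PER_LOOP 4 /* same grouping as ice_tx_fill_hw_ring() */

int
main(void)
{
	uint16_t nb_pkts = 11;
	uint16_t mainpart = nb_pkts & ~(N_PER_LOOP - 1); /* 8 */
	uint16_t leftover = nb_pkts & (N_PER_LOOP - 1);  /* 3 */

	/* groups of 4 go through the unrolled tx4()-style path */
	for (uint16_t i = 0; i < mainpart; i += N_PER_LOOP)
		printf("tx4: packets %u..%u\n", i, i + N_PER_LOOP - 1);
	/* the remainder is handled one packet at a time, like tx1() */
	for (uint16_t i = 0; i < leftover; i++)
		printf("tx1: packet %u\n", mainpart + i);
	return 0;
}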

* [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (30 preceding siblings ...)
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 31/34] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-12  7:00   ` Wenzhuo Lu
  2018-12-13 21:30     ` Ferruh Yigit
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note Wenzhuo Lu
                     ` (2 subsequent siblings)
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  7:00 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rx_descriptor_done
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 84 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  3 ++
 3 files changed, 90 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d63407c..d938852 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,9 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_done           = ice_rx_descriptor_done,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 986cbc6..fe67b49 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1490,6 +1490,90 @@
 	return desc;
 }
 
+int
+ice_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq = rx_queue;
+	uint16_t desc;
+	int ret;
+
+	if (unlikely(offset >= rxq->nb_rx_desc)) {
+		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
+		return 0;
+	}
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	rxdp = &rxq->rx_ring[desc];
+
+	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		  ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+		 (1 << ICE_RX_DESC_STATUS_DD_S));
+
+	return ret;
+}
+
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..12ad383 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,9 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_done(void *rx_queue, uint16_t offset);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (31 preceding siblings ...)
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops Wenzhuo Lu
@ 2018-12-12  7:00   ` Wenzhuo Lu
  2018-12-13 21:34     ` Ferruh Yigit
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build Wenzhuo Lu
  2018-12-13  6:02   ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Varghese, Vipin
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  7:00 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/features/ice.ini       |  38 +++++++++++++
 doc/guides/nics/ice.rst                | 101 +++++++++++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |   4 ++
 4 files changed, 144 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cd01565 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/features/ice*.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..196b8d5
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,38 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = Y
+QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Extended stats       = Y
+FW version           = Y
+Module EEPROM dump   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..95b409a
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,101 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+====================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+VLAN filtering only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
+
+
+Limitations or Known Issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+The ice code released in 19.02 is for evaluation only.
+
+
+Secondary Process
+~~~~~~~~~~~~~~~~~
+The ice PMD supports secondary processes, but it does not support changing
+settings or configuration from a secondary process.
+
+
+Promiscuous mode not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+As promiscuous mode is not supported at this stage, a port can only receive
+packets whose destination MAC address is the port's own.
+
+
+TX anti-spoofing cannot be disabled
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+TX anti-spoofing is enabled by default and cannot be disabled at this stage.
+Any TX packet whose source MAC address is not the port's own will be dropped
+by HW, which means io-fwd is not supported for now. MAC-fwd is recommended
+for evaluation.
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86..c5a054b 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added ICE net PMD.**
+
+  Added the new ``ice`` net driver for Intel® Ethernet Network Adapters E810.
+  See the :doc:`../nics/ice` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (32 preceding siblings ...)
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-12-12  7:00   ` Wenzhuo Lu
  2018-12-13 21:15     ` Ferruh Yigit
  2018-12-13  6:02   ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Varghese, Vipin
  34 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-12  7:00 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/base/meson.build | 30 ++++++++++++++++++++++++++++++
 drivers/net/ice/meson.build      | 15 +++++++++++++++
 drivers/net/meson.build          |  1 +
 3 files changed, 46 insertions(+)
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/meson.build

diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
new file mode 100644
index 0000000..5aafff3
--- /dev/null
+++ b/drivers/net/ice/base/meson.build
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+	'ice_controlq.c',
+	'ice_common.c',
+	'ice_sched.c',
+	'ice_switch.c',
+	'ice_nvm.c',
+]
+
+error_cflags = ['-Wno-sign-compare', '-Wno-unused-value',
+		'-Wno-format', '-Wno-error=format-security',
+		'-Wno-strict-aliasing', '-Wno-unused-but-set-variable',
+		'-Wno-unused-variable',
+]
+c_args = cflags
+if allow_experimental_apis
+	c_args += '-DALLOW_EXPERIMENTAL_API'
+endif
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ice_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
new file mode 100644
index 0000000..b921354
--- /dev/null
+++ b/drivers/net/ice/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+cflags += ['-DALLOW_EXPERIMENTAL_API']
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+	'ice_ethdev.c',
+	'ice_lan_rxtx.c'
+	)
+
+deps += ['hash']
+includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 980eec2..45da3bb 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -17,6 +17,7 @@ drivers = ['af_packet',
 	'enic',
 	'failsafe',
 	'fm10k', 'i40e',
+	'ice',
 	'ifc',
 	'ixgbe',
 	'kni',
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures Wenzhuo Lu
@ 2018-12-12 15:19     ` Ferruh Yigit
  2018-12-12 16:54       ` Stillwell Jr, Paul M
  2018-12-12 16:55       ` Ferruh Yigit
  2018-12-12 15:19     ` Ferruh Yigit
  1 sibling, 2 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-12 15:19 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Paul M Stillwell Jr

On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> Add the structures required by the NIC.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

<...>

> @@ -0,0 +1,869 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#ifndef _ICE_TYPE_H_
> +#define _ICE_TYPE_H_
> +
> +#define ETH_ALEN	6
> +
> +#define ETH_HEADER_LEN	14
> +
> +#define BIT(a) (1UL << (a))
> +#define BIT_ULL(a) (1ULL << (a))
> +
> +#define BITS_PER_BYTE	8
> +
> +#define ICE_BYTES_PER_WORD	2
> +#define ICE_BYTES_PER_DWORD	4
> +#define ICE_MAX_TRAFFIC_CLASS	8
> +
> +
> +#include "ice_status.h"
> +#include "ice_hw_autogen.h"

Where this "ice_hw_autogen.h header file should come from? Name suggest it will
be auto-generated but my build system not able to find it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures Wenzhuo Lu
  2018-12-12 15:19     ` Ferruh Yigit
@ 2018-12-12 15:19     ` Ferruh Yigit
  2018-12-13  5:17       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-12 15:19 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Paul M Stillwell Jr

On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> Add the structures required by the NIC.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

For consistency, base driver updates subsystem can be used as "net/ice/base: "
and it should start with lowercase, so something like:

"net/ice/base: add basic structures"

Same for rest of the base code updates.

<...>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12 15:19     ` Ferruh Yigit
@ 2018-12-12 16:54       ` Stillwell Jr, Paul M
  2018-12-12 16:57         ` Ferruh Yigit
  2018-12-12 16:55       ` Ferruh Yigit
  1 sibling, 1 reply; 309+ messages in thread
From: Stillwell Jr, Paul M @ 2018-12-12 16:54 UTC (permalink / raw)
  To: Yigit, Ferruh, Lu, Wenzhuo, dev

The "ice_hw_autogen.h" file is an auto generated file from the HW engineers. It is not created dynamically when the driver is built. I'm not sure why they didn't call it ice_registers.h, but that was their decision.

Paul


-----Original Message-----
From: Yigit, Ferruh 
Sent: Wednesday, December 12, 2018 7:19 AM
To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures

On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> Add the structures required by the NIC.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

<...>

> @@ -0,0 +1,869 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#ifndef _ICE_TYPE_H_
> +#define _ICE_TYPE_H_
> +
> +#define ETH_ALEN	6
> +
> +#define ETH_HEADER_LEN	14
> +
> +#define BIT(a) (1UL << (a))
> +#define BIT_ULL(a) (1ULL << (a))
> +
> +#define BITS_PER_BYTE	8
> +
> +#define ICE_BYTES_PER_WORD	2
> +#define ICE_BYTES_PER_DWORD	4
> +#define ICE_MAX_TRAFFIC_CLASS	8
> +
> +
> +#include "ice_status.h"
> +#include "ice_hw_autogen.h"

Where this "ice_hw_autogen.h header file should come from? Name suggest it will be auto-generated but my build system not able to find it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12 15:19     ` Ferruh Yigit
  2018-12-12 16:54       ` Stillwell Jr, Paul M
@ 2018-12-12 16:55       ` Ferruh Yigit
  1 sibling, 0 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-12 16:55 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Paul M Stillwell Jr

On 12/12/2018 3:19 PM, Ferruh Yigit wrote:
> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
>> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
>>
>> Add the structures required by the NIC.
>>
>> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> <...>
> 
>> @@ -0,0 +1,869 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2001-2018
>> + */
>> +
>> +#ifndef _ICE_TYPE_H_
>> +#define _ICE_TYPE_H_
>> +
>> +#define ETH_ALEN	6
>> +
>> +#define ETH_HEADER_LEN	14
>> +
>> +#define BIT(a) (1UL << (a))
>> +#define BIT_ULL(a) (1ULL << (a))
>> +
>> +#define BITS_PER_BYTE	8
>> +
>> +#define ICE_BYTES_PER_WORD	2
>> +#define ICE_BYTES_PER_DWORD	4
>> +#define ICE_MAX_TRAFFIC_CLASS	8
>> +
>> +
>> +#include "ice_status.h"
>> +#include "ice_hw_autogen.h"
> 
> Where this "ice_hw_autogen.h header file should come from? Name suggest it will
> be auto-generated but my build system not able to find it.
> 

It is in 01/34, which was blocked. It is released now, I will test again.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12 16:54       ` Stillwell Jr, Paul M
@ 2018-12-12 16:57         ` Ferruh Yigit
  0 siblings, 0 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-12 16:57 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Lu, Wenzhuo, dev

On 12/12/2018 4:54 PM, Stillwell Jr, Paul M wrote:
> The "ice_hw_autogen.h" file is an auto generated file from the HW engineers. It is not created dynamically when the driver is built. I'm not sure why they didn't call it ice_registers.h, but that was their decision.

My bad, it is in the first patch; the first patch didn't hit the mailing list
and I didn't realize that it was missing. Now I have released the patch, so the
patchset should be complete, and I will test again.

> 
> Paul
> 
> 
> -----Original Message-----
> From: Yigit, Ferruh 
> Sent: Wednesday, December 12, 2018 7:19 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
> 
> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
>> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
>>
>> Add the structures required by the NIC.
>>
>> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> <...>
> 
>> @@ -0,0 +1,869 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2001-2018
>> + */
>> +
>> +#ifndef _ICE_TYPE_H_
>> +#define _ICE_TYPE_H_
>> +
>> +#define ETH_ALEN	6
>> +
>> +#define ETH_HEADER_LEN	14
>> +
>> +#define BIT(a) (1UL << (a))
>> +#define BIT_ULL(a) (1ULL << (a))
>> +
>> +#define BITS_PER_BYTE	8
>> +
>> +#define ICE_BYTES_PER_WORD	2
>> +#define ICE_BYTES_PER_DWORD	4
>> +#define ICE_MAX_TRAFFIC_CLASS	8
>> +
>> +
>> +#include "ice_status.h"
>> +#include "ice_hw_autogen.h"
> 
> Where this "ice_hw_autogen.h header file should come from? Name suggest it will be auto-generated but my build system not able to find it.
> 

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-12 18:17     ` Ferruh Yigit
  2018-12-13  2:39       ` Lu, Wenzhuo
  2018-12-13  2:57       ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-12 18:17 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> @@ -297,6 +297,15 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
>  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>  
>  #
> +# Compile burst-oriented ICE PMD driver
> +#
> +CONFIG_RTE_LIBRTE_ICE_PMD=y
> +CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
> +CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
> +CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
> +CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y

Is there a way to convert this into a runtime config? Does it need to be a
compile-time config?
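
For reference, a devargs-based sketch of what a runtime option could look like.
The "rx_bulk_alloc" option name and both functions below are hypothetical, not
existing driver code:

#include <stdbool.h>
#include <stdlib.h>
#include <rte_common.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>

#define ICE_RX_BULK_ALLOC_ARG "rx_bulk_alloc"

/* Hypothetical handler for "rx_bulk_alloc=<0|1>" given on the EAL command
 * line, e.g. -w 0000:18:00.0,rx_bulk_alloc=1
 */
static int
handle_rx_bulk_alloc(const char *key __rte_unused, const char *value,
		     void *opaque)
{
	bool *enabled = opaque;

	*enabled = atoi(value) != 0;
	return 0;
}

static void
ice_parse_devargs(struct rte_devargs *devargs, bool *bulk_alloc)
{
	static const char *const keys[] = { ICE_RX_BULK_ALLOC_ARG, NULL };
	struct rte_kvargs *kvlist;

	if (devargs == NULL)
		return;
	kvlist = rte_kvargs_parse(devargs->args, keys);
	if (kvlist == NULL)
		return;
	rte_kvargs_process(kvlist, ICE_RX_BULK_ALLOC_ARG,
			   handle_rx_bulk_alloc, bulk_alloc);
	rte_kvargs_free(kvlist);
}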

> +CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n

Some of these config options are documented in ice.rst, but that document is
introduced in the last patch. What do you think about adding the documentation
as each feature is added?

<...>

> +#
> +# Add extra flags for base driver files (also known as shared code)
> +# to disable warnings
> +#
> +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> +CFLAGS_BASE_DRIVER = -wd593 -wd188
> +else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
> +CFLAGS_BASE_DRIVER += -Wno-sign-compare
> +CFLAGS_BASE_DRIVER += -Wno-unused-value
> +CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
> +CFLAGS_BASE_DRIVER += -Wno-format
> +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
> +CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
> +CFLAGS_BASE_DRIVER += -Wno-unused-variable
> +else
> +CFLAGS_BASE_DRIVER  = -Wno-sign-compare
> +CFLAGS_BASE_DRIVER += -Wno-unused-value
> +CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
> +CFLAGS_BASE_DRIVER += -Wno-format
> +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
> +CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
> +CFLAGS_BASE_DRIVER += -Wno-format-security
> +CFLAGS_BASE_DRIVER += -Wno-unused-variable
> +
> +ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
> +CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> +endif

Are all these special warning-disable cases needed for ice? It looks like this
could be a copy-paste from other drivers. I suggest starting from an empty
exception list; we can add flags if we need them, but let's not start with an
existing list.

<...>

> +# this lib depends upon:
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal lib/librte_ether
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs

As far as I remember we removed DEPDIRS from the makefiles; there is no more
dynamic dependency resolving, so it should be safe to remove the above lines.

> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> new file mode 100644
> index 0000000..e0bf15c
> --- /dev/null
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -0,0 +1,640 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#include <rte_ethdev_pci.h>
> +
> +#include "base/ice_sched.h"
> +#include "ice_ethdev.h"
> +#include "ice_rxtx.h"
> +
> +#define ICE_MAX_QP_NUM "max_queue_pair_num"

When documentation is added in this patch, can you also add this runtime
config to it, please?

<...>

> +static int
> +ice_dev_init(struct rte_eth_dev *dev)
> +{
> +	struct rte_pci_device *pci_dev;
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	int ret;
> +
> +	dev->dev_ops = &ice_eth_dev_ops;
> +
> +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> +
> +	rte_eth_copy_pci_info(dev, pci_dev);

This is done by rte_eth_dev_pci_generic_probe(); do we need it here?

<...>

> +RTE_INIT(ice_init_log);
> +static void
> +ice_init_log(void)

These lines can be merged; please check other samples.

> +{
> +	ice_logtype_init = rte_log_register("pmd.ice.init");

pmd.net.ice.init

> +	if (ice_logtype_init >= 0)
> +		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
> +	ice_logtype_driver = rte_log_register("pmd.ice.driver");

pmd.net.ice.driver
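
Something like below would be the merged form (just a sketch, using the log
names suggested above and the ice_logtype_* variables already declared in this
patch):

RTE_INIT(ice_init_log)
{
	ice_logtype_init = rte_log_register("pmd.net.ice.init");
	if (ice_logtype_init >= 0)
		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
	ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
	if (ice_logtype_driver >= 0)
		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
}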

<...>

> +static void
> +ice_dev_close(struct rte_eth_dev *dev)
> +{
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> +	ice_res_pool_destroy(&pf->msix_pool);
> +	ice_release_vsi(pf->main_vsi);
> +
> +	ice_shutdown_all_ctrlq(hw);
> +}

I am mostly for ordering functions in a way that doesn't require forward
declarations, which mostly helps when reading the code since the function order
is then close to the call order.

It is up to you, but also for the sake of consistency I think it is better to
move this function up and leave the probe/remove/init_log functions as the last
functions in the file.

<...>

> +#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
> +		       ICE_FLAG_DCB | \
> +		       ICE_FLAG_VMDQ | \
> +		       ICE_FLAG_SRIOV | \
> +		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
> +		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
> +		       ICE_FLAG_FDIR | \
> +		       ICE_FLAG_VXLAN | \
> +		       ICE_FLAG_RSS_AQ_CAPABLE | \
> +		       ICE_FLAG_VF_MAC_BY_PF)
> +
> +#define ICE_RSS_OFFLOAD_ALL ( \
> +	ETH_RSS_FRAG_IPV4 | \
> +	ETH_RSS_NONFRAG_IPV4_TCP | \
> +	ETH_RSS_NONFRAG_IPV4_UDP | \
> +	ETH_RSS_NONFRAG_IPV4_SCTP | \
> +	ETH_RSS_NONFRAG_IPV4_OTHER | \
> +	ETH_RSS_FRAG_IPV6 | \
> +	ETH_RSS_NONFRAG_IPV6_TCP | \
> +	ETH_RSS_NONFRAG_IPV6_UDP | \
> +	ETH_RSS_NONFRAG_IPV6_SCTP | \
> +	ETH_RSS_NONFRAG_IPV6_OTHER | \
> +	ETH_RSS_L2_PAYLOAD)

ICE_RSS_OFFLOAD_ALL is not used at all until later in this patchset. I think it
is more logical to add code in the patch where it is used; otherwise it is hard
to have the complete logic in a single patch and harder to observe any possible
issue. What do you think about re-arranging them?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions Wenzhuo Lu
@ 2018-12-12 19:58     ` Mattias Rönnblom
  2018-12-12 21:18       ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 309+ messages in thread
From: Mattias Rönnblom @ 2018-12-12 19:58 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Paul M Stillwell Jr

On 2018-12-12 07:59, Wenzhuo Lu wrote:
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> Add code that multiple other features use.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> ---
>   drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
>   drivers/net/ice/base/ice_common.h |  186 ++
>   2 files changed, 3707 insertions(+)
>   create mode 100644 drivers/net/ice/base/ice_common.c
>   create mode 100644 drivers/net/ice/base/ice_common.h
> 
> diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> new file mode 100644
> index 0000000..d49264d
> --- /dev/null
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -0,0 +1,3521 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#include "ice_common.h"
> +#include "ice_sched.h"
> +#include "ice_adminq_cmd.h"
> +
> +#include "ice_flow.h"
> +#include "ice_switch.h"
> +
> +#define ICE_PF_RESET_WAIT_COUNT	200
> +
> +#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
> +	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
> +	     ((ICE_RX_OPC_MDID << \
> +	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
> +	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
> +	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
> +	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
> +
> +#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
> +	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
> +	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
> +	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
> +	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
> +	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
> +
> +
> +/**
> + * ice_set_mac_type - Sets MAC type
> + * @hw: pointer to the HW structure
> + *
> + * This function sets the MAC type of the adapter based on the
> + * vendor ID and device ID stored in the hw structure.
> + */
> +static enum ice_status ice_set_mac_type(struct ice_hw *hw)
> +{
> +	enum ice_status status = ICE_SUCCESS;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
> +
> +	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
> +		switch (hw->device_id) {
> +		default:
> +			hw->mac_type = ICE_MAC_GENERIC;
> +			break;
> +		}
> +	} else {
> +		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
> +	}
> +

Remove braces from single-statement block.

> +	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
> +		  hw->mac_type, status);
> +
> +	return status;
> +}
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +void ice_dev_onetime_setup(struct ice_hw *hw)
> +{
> +	/* configure Rx - set non pxe mode */
> +	wr32(hw, GLLAN_RCTL_0, 0x1);
> +
> +
> +
> +}
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +/**
> + * ice_clear_pf_cfg - Clear PF configuration
> + * @hw: pointer to the hardware structure
> + *
> + * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
> + * configuration, flow director filters, etc.).
> + */
> +enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
> +{
> +	struct ice_aq_desc desc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_aq_manage_mac_read - manage MAC address read command
> + * @hw: pointer to the hw struct
> + * @buf: a virtual buffer to hold the manage MAC read response
> + * @buf_size: Size of the virtual buffer
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function is used to return per PF station MAC address (0x0107).
> + * NOTE: Upon successful completion of this command, MAC address information
> + * is returned in user specified buffer. Please interpret user specified
> + * buffer as "manage_mac_read" response.
> + * Response such as various MAC addresses are stored in HW struct (port.mac)
> + * ice_aq_discover_caps is expected to be called before this function is called.
> + */
> +static enum ice_status
> +ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
> +		       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_manage_mac_read_resp *resp;
> +	struct ice_aqc_manage_mac_read *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 flags;
> +	u8 i;
> +
> +	cmd = &desc.params.mac_read;
> +
> +	if (buf_size < sizeof(*resp))
> +		return ICE_ERR_BUF_TOO_SHORT;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +	if (status)
> +		return status;
> +
> +	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
> +	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
> +
> +	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
> +		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
> +		return ICE_ERR_CFG;
> +	}
> +
> +	/* A single port can report up to two (LAN and WoL) addresses */
> +	for (i = 0; i < cmd->num_addr; i++)
> +		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
> +			ice_memcpy(hw->port_info->mac.lan_addr,
> +				   resp[i].mac_addr, ETH_ALEN,
> +				   ICE_DMA_TO_NONDMA);
> +			ice_memcpy(hw->port_info->mac.perm_addr,
> +				   resp[i].mac_addr,
> +				   ETH_ALEN, ICE_DMA_TO_NONDMA);
> +			break;
> +		}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_aq_get_phy_caps - returns PHY capabilities
> + * @pi: port information structure
> + * @qual_mods: report qualified modules
> + * @report_mode: report mode capabilities
> + * @pcaps: structure for PHY capabilities to be filled
> + * @cd: pointer to command details structure or NULL
> + *
> + * Returns the various PHY capabilities supported on the Port (0x0600)
> + */
> +enum ice_status
> +ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
> +		    struct ice_aqc_get_phy_caps_data *pcaps,
> +		    struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_get_phy_caps *cmd;
> +	u16 pcaps_size = sizeof(*pcaps);
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	cmd = &desc.params.get_phy;
> +
> +	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
> +
> +	if (qual_mods)
> +		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
> +
> +	cmd->param0 |= CPU_TO_LE16(report_mode);
> +	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
> +
> +	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
> +		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
> +		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
> +	}
> +
> +	return status;
> +}
> +
> +/**
> + * ice_get_media_type - Gets media type
> + * @pi: port information structure
> + */
> +static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
> +{
> +	struct ice_link_status *hw_link_info;
> +
> +	if (!pi)
> +		return ICE_MEDIA_UNKNOWN;
> +
> +	hw_link_info = &pi->phy.link_info;
> +	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
> +		/* If more than one media type is selected, report unknown */
> +		return ICE_MEDIA_UNKNOWN;
> +
> +	if (hw_link_info->phy_type_low) {
> +		switch (hw_link_info->phy_type_low) {
> +		case ICE_PHY_TYPE_LOW_1000BASE_SX:
> +		case ICE_PHY_TYPE_LOW_1000BASE_LX:
> +		case ICE_PHY_TYPE_LOW_10GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_10GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
> +		case ICE_PHY_TYPE_LOW_25GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
> +		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
> +		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_50GBASE_FR:
> +		case ICE_PHY_TYPE_LOW_50GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
> +		case ICE_PHY_TYPE_LOW_100GBASE_DR:
> +			return ICE_MEDIA_FIBER;
> +		case ICE_PHY_TYPE_LOW_100BASE_TX:
> +		case ICE_PHY_TYPE_LOW_1000BASE_T:
> +		case ICE_PHY_TYPE_LOW_2500BASE_T:
> +		case ICE_PHY_TYPE_LOW_5GBASE_T:
> +		case ICE_PHY_TYPE_LOW_10GBASE_T:
> +		case ICE_PHY_TYPE_LOW_25GBASE_T:
> +			return ICE_MEDIA_BASET;
> +		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
> +		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_CP:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
> +			return ICE_MEDIA_DA;
> +		case ICE_PHY_TYPE_LOW_1000BASE_KX:
> +		case ICE_PHY_TYPE_LOW_2500BASE_KX:
> +		case ICE_PHY_TYPE_LOW_2500BASE_X:
> +		case ICE_PHY_TYPE_LOW_5GBASE_KR:
> +		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
> +		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
> +		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
> +			return ICE_MEDIA_BACKPLANE;
> +		}
> +	} else {
> +		switch (hw_link_info->phy_type_high) {
> +		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
> +			return ICE_MEDIA_BACKPLANE;
> +		}
> +	}
> +	return ICE_MEDIA_UNKNOWN;
> +}
> +
> +/**
> + * ice_aq_get_link_info
> + * @pi: port information structure
> + * @ena_lse: enable/disable LinkStatusEvent reporting
> + * @link: pointer to link status structure - optional
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get Link Status (0x607). Returns the link status of the adapter.
> + */
> +enum ice_status
> +ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
> +		     struct ice_link_status *link, struct ice_sq_cd *cd)
> +{
> +	struct ice_link_status *hw_link_info_old, *hw_link_info;
> +	struct ice_aqc_get_link_status_data link_data = { 0 };
> +	struct ice_aqc_get_link_status *resp;
> +	enum ice_media_type *hw_media_type;
> +	struct ice_fc_info *hw_fc_info;
> +	bool tx_pause, rx_pause;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 cmd_flags;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

if (pi == NULL)

> +	hw_link_info_old = &pi->phy.link_info_old;
> +	hw_media_type = &pi->phy.media_type;
> +	hw_link_info = &pi->phy.link_info;
> +	hw_fc_info = &pi->fc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
> +	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
> +	resp = &desc.params.get_link_status;
> +	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
> +	resp->lport_num = pi->lport;
> +
> +	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
> +				 cd);
> +
> +	if (status != ICE_SUCCESS)
> +		return status;
> +
> +	/* save off old link status information */
> +	*hw_link_info_old = *hw_link_info;
> +
> +	/* update current link status information */
> +	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
> +	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
> +	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
> +	*hw_media_type = ice_get_media_type(pi);
> +	hw_link_info->link_info = link_data.link_info;
> +	hw_link_info->an_info = link_data.an_info;
> +	hw_link_info->ext_info = link_data.ext_info;
> +	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
> +	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
> +	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
> +	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
> +
> +	/* update fc info */
> +	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
> +	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
> +	if (tx_pause && rx_pause)
> +		hw_fc_info->current_mode = ICE_FC_FULL;
> +	else if (tx_pause)
> +		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
> +	else if (rx_pause)
> +		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
> +	else
> +		hw_fc_info->current_mode = ICE_FC_NONE;
> +
> +	hw_link_info->lse_ena =
> +		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
> +
> +
> +	/* save link status information */
> +	if (link)
> +		*link = *hw_link_info;
> +
> +	/* flag cleared so calling functions don't call AQ again */
> +	pi->phy.get_link_info = false;
> +
> +	return status;
> +}
> +
> +/**
> + * ice_init_flex_flags
> + * @hw: pointer to the hardware structure
> + * @prof_id: Rx Descriptor Builder profile ID
> + *
> + * Function to initialize Rx flex flags
> + */
> +static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
> +{
> +	u8 idx = 0;
> +
> +	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
> +	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
> +	 * flexiflags1[3:0] - Not used for flag programming
> +	 * flexiflags2[7:0] - Tunnel and VLAN types
> +	 * 2 invalid fields in last index
> +	 */
> +	switch (prof_id) {
> +	/* Rx flex flags are currently programmed for the NIC profiles only.
> +	 * Different flag bit programming configurations can be added per
> +	 * profile as needed.
> +	 */
> +	case ICE_RXDID_FLEX_NIC:
> +	case ICE_RXDID_FLEX_NIC_2:
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
> +				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
> +				   ICE_RXFLG_FIN, idx++);
> +		/* flex flag 1 is not used for flexi-flag programming, skipping
> +		 * these four FLG64 bits.
> +		 */
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
> +				   ICE_RXFLG_EVLAN_x9100, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
> +				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
> +				   ICE_RXFLG_TNL0, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
> +		break;
> +
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Flag programming for profile ID %d not supported\n",
> +			  prof_id);
> +	}
> +}
> +
> +/**
> + * ice_init_flex_flds
> + * @hw: pointer to the hardware structure
> + * @prof_id: Rx Descriptor Builder profile ID
> + *
> + * Function to initialize flex descriptors
> + */
> +static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
> +{
> +	enum ice_flex_rx_mdid mdid;
> +
> +	switch (prof_id) {
> +	case ICE_RXDID_FLEX_NIC:
> +	case ICE_RXDID_FLEX_NIC_2:
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
> +
> +		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
> +			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
> +
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
> +
> +		ice_init_flex_flags(hw, prof_id);
> +		break;
> +
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Field init for profile ID %d not supported\n",
> +			  prof_id);
> +	}
> +}
> +
> +
> +/**
> + * ice_init_fltr_mgmt_struct - initializes filter management list and locks
> + * @hw: pointer to the hw struct
> + */
> +static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw;
> +
> +	hw->switch_info = (struct ice_switch_info *)
> +			  ice_malloc(hw, sizeof(*hw->switch_info));
> +	sw = hw->switch_info;
> +
> +	if (!sw)
> +		return ICE_ERR_NO_MEMORY;

if (sw == NULL)

> +
> +	INIT_LIST_HEAD(&sw->vsi_list_map_head);
> +
> +	return ice_init_def_sw_recp(hw);
> +}
> +
> +/**
> + * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
> + * @hw: pointer to the hw struct
> + */
> +static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw = hw->switch_info;
> +	struct ice_vsi_list_map_info *v_pos_map;
> +	struct ice_vsi_list_map_info *v_tmp_map;
> +	struct ice_sw_recipe *recps;
> +	u8 i;
> +
> +	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
> +				 ice_vsi_list_map_info, list_entry) {
> +		LIST_DEL(&v_pos_map->list_entry);
> +		ice_free(hw, v_pos_map);
> +	}
> +	recps = hw->switch_info->recp_list;
> +	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
> +		recps[i].root_rid = i;
> +
> +		if (recps[i].adv_rule) {
> +			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> +			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> +
> +			ice_destroy_lock(&recps[i].filt_rule_lock);
> +			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
> +						 &recps[i].filt_rules,
> +						 ice_adv_fltr_mgmt_list_entry,
> +						 list_entry) {
> +				LIST_DEL(&lst_itr->list_entry);
> +				ice_free(hw, lst_itr->lkups);
> +				ice_free(hw, lst_itr);
> +			}
> +		} else {
> +			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
> +
> +			ice_destroy_lock(&recps[i].filt_rule_lock);
> +			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
> +						 &recps[i].filt_rules,
> +						 ice_fltr_mgmt_list_entry,
> +						 list_entry) {
> +				LIST_DEL(&lst_itr->list_entry);
> +				ice_free(hw, lst_itr);
> +			}
> +		}
> +	}
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	ice_free(hw, sw->recp_list);
> +	ice_free(hw, sw);
> +}
> +
> +#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
> +	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
> +#define ICE_FW_LOG_DESC_SIZE_MAX	\
> +	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
> +
> +/**
> + * ice_cfg_fw_log - configure FW logging
> + * @hw: pointer to the hw struct
> + * @enable: enable certain FW logging events if true, disable all if false
> + *
> + * This function enables/disables the FW logging via Rx CQ events and a UART
> + * port based on predetermined configurations. FW logging via the Rx CQ can be
> + * enabled/disabled for individual PF's. However, FW logging via the UART can
> + * only be enabled/disabled for all PFs on the same device.
> + *
> + * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
> + * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
> + * before initializing the device.
> + *
> + * When re/configuring FW logging, callers need to update the "cfg" elements of
> + * the hw->fw_log.evnts array with the desired logging event configurations for
> + * modules of interest. When disabling FW logging completely, the callers can
> + * just pass false in the "enable" parameter. On completion, the function will
> + * update the "cur" element of the hw->fw_log.evnts array with the resulting
> + * logging event configurations of the modules that are being re/configured. FW
> + * logging modules that are not part of a reconfiguration operation retain their
> + * previous states.
> + *
> + * Before resetting the device, it is recommended that the driver disables FW
> + * logging before shutting down the control queue. When disabling FW logging
> + * ("enable" = false), the latest configurations of FW logging events stored in
> + * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
> + * a device reset.
> + *
> + * When enabling FW logging to emit log messages via the Rx CQ during the
> + * device's initialization phase, a mechanism alternative to interrupt handlers
> + * needs to be used to extract FW log messages from the Rx CQ periodically and
> + * to prevent the Rx CQ from being full and stalling other types of control
> + * messages from FW to SW. Interrupts are typically disabled during the device's
> + * initialization phase.
> + */
> +static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
> +{
> +	struct ice_aqc_fw_logging_data *data = NULL;
> +	struct ice_aqc_fw_logging *cmd;
> +	enum ice_status status = ICE_SUCCESS;
> +	u16 i, chgs = 0, len = 0;
> +	struct ice_aq_desc desc;
> +	u8 actv_evnts = 0;
> +	void *buf = NULL;
> +
> +	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
> +		return ICE_SUCCESS;
> +
> +	/* Disable FW logging only when the control queue is still responsive */
> +	if (!enable &&
> +	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
> +		return ICE_SUCCESS;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
> +	cmd = &desc.params.fw_logging;
> +
> +	/* Indicate which controls are valid */
> +	if (hw->fw_log.cq_en)
> +		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
> +
> +	if (hw->fw_log.uart_en)
> +		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
> +
> +	if (enable) {
> +		/* Fill in an array of entries with FW logging modules and
> +		 * logging events being reconfigured.
> +		 */
> +		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
> +			u16 val;
> +
> +			/* Keep track of enabled event types */
> +			actv_evnts |= hw->fw_log.evnts[i].cfg;
> +
> +			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
> +				continue;
> +
> +			if (!data) {
> +				data = (struct ice_aqc_fw_logging_data *)
> +					ice_malloc(hw,
> +						   ICE_FW_LOG_DESC_SIZE_MAX);
> +				if (!data)
> +					return ICE_ERR_NO_MEMORY;
> +			}
> +
> +			val = i << ICE_AQC_FW_LOG_ID_S;
> +			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
> +			data->entry[chgs++] = CPU_TO_LE16(val);
> +		}
> +
> +		/* Only enable FW logging if at least one module is specified.
> +		 * If FW logging is currently enabled but all modules are not
> +		 * enabled to emit log messages, disable FW logging altogether.
> +		 */
> +		if (actv_evnts) {
> +			/* Leave if there is effectively no change */
> +			if (!chgs)
> +				goto out;
> +
> +			if (hw->fw_log.cq_en)
> +				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
> +
> +			if (hw->fw_log.uart_en)
> +				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
> +
> +			buf = data;
> +			len = ICE_FW_LOG_DESC_SIZE(chgs);
> +			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +		}
> +	}
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
> +	if (!status) {
> +		/* Update the current configuration to reflect events enabled.
> +		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
> +		 * logging mode is enabled for the device. They do not reflect
> +		 * actual modules being enabled to emit log messages. So, their
> +		 * values remain unchanged even when all modules are disabled.
> +		 */
> +		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
> +
> +		hw->fw_log.actv_evnts = actv_evnts;
> +		for (i = 0; i < cnt; i++) {
> +			u16 v, m;
> +
> +			if (!enable) {
> +				/* When disabling all FW logging events as part
> +				 * of device's de-initialization, the original
> +				 * configurations are retained, and can be used
> +				 * to reconfigure FW logging later if the device
> +				 * is re-initialized.
> +				 */
> +				hw->fw_log.evnts[i].cur = 0;
> +				continue;
> +			}
> +
> +			v = LE16_TO_CPU(data->entry[i]);
> +			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
> +			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
> +		}
> +	}
> +
> +out:
> +	if (data)
> +		ice_free(hw, data);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_output_fw_log
> + * @hw: pointer to the hw struct
> + * @desc: pointer to the AQ message descriptor
> + * @buf: pointer to the buffer accompanying the AQ message
> + *
> + * Formats a FW Log message and outputs it via the standard driver logs.
> + */
> +void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
> +{
> +	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
> +	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
> +			LE16_TO_CPU(desc->datalen));
> +	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
> +}
> +
> +/**
> + * ice_get_itr_intrl_gran - determine int/intrl granularity
> + * @hw: pointer to the hw struct
> + *
> + * Determines the itr/intrl granularities based on the maximum aggregate
> + * bandwidth according to the device's configuration during power-on.
> + */
> +static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
> +{
> +	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
> +			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
> +			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
> +
> +	switch (max_agg_bw) {
> +	case ICE_MAX_AGG_BW_200G:
> +	case ICE_MAX_AGG_BW_100G:
> +	case ICE_MAX_AGG_BW_50G:
> +		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
> +		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
> +		break;
> +	case ICE_MAX_AGG_BW_25G:
> +		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
> +		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
> +		break;
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Failed to determine itr/intrl granularity\n");
> +		return ICE_ERR_CFG;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_init_hw - main hardware initialization routine
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_init_hw(struct ice_hw *hw)
> +{
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	enum ice_status status;
> +	u16 mac_buf_len;
> +	void *mac_buf;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
> +
> +
> +	/* Set MAC type based on DeviceID */
> +	status = ice_set_mac_type(hw);
> +	if (status)
> +		return status;
> +
> +	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
> +			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
> +		PF_FUNC_RID_FUNCTION_NUMBER_S;
> +
> +
> +	status = ice_reset(hw, ICE_RESET_PFR);
> +	if (status)
> +		return status;
> +
> +	status = ice_get_itr_intrl_gran(hw);
> +	if (status)
> +		return status;
> +
> +
> +	status = ice_init_all_ctrlq(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	/* Enable FW logging. Not fatal if this fails. */
> +	status = ice_cfg_fw_log(hw, true);
> +	if (status)
> +		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
> +
> +	status = ice_clear_pf_cfg(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +
> +	ice_clear_pxe_mode(hw);
> +
> +	status = ice_init_nvm(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	status = ice_get_caps(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	hw->port_info = (struct ice_port_info *)
> +			ice_malloc(hw, sizeof(*hw->port_info));
> +	if (!hw->port_info) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_cqinit;
> +	}
> +
> +	/* set the back pointer to hw */
> +	hw->port_info->hw = hw;
> +
> +	/* Initialize port_info struct with switch configuration data */
> +	status = ice_get_initial_sw_cfg(hw);
> +	if (status)
> +		goto err_unroll_alloc;
> +
> +	hw->evb_veb = true;
> +
> +	/* Query the allocated resources for Tx scheduler */
> +	status = ice_sched_query_res_alloc(hw);
> +	if (status) {
> +		ice_debug(hw, ICE_DBG_SCHED,
> +			  "Failed to get scheduler allocated resources\n");
> +		goto err_unroll_alloc;
> +	}
> +
> +
> +	/* Initialize port_info struct with scheduler data */
> +	status = ice_sched_init_port(hw->port_info);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));
> +	if (!pcaps) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_sched;
> +	}
> +
> +	/* Initialize port_info struct with PHY capabilities */
> +	status = ice_aq_get_phy_caps(hw->port_info, false,
> +				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
> +	ice_free(hw, pcaps);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +	/* Initialize port_info struct with link information */
> +	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
> +	if (status)
> +		goto err_unroll_sched;
> +	/* need a valid SW entry point to build a Tx tree */
> +	if (!hw->sw_entry_point_layer) {
> +		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
> +		status = ICE_ERR_CFG;
> +		goto err_unroll_sched;
> +	}
> +	INIT_LIST_HEAD(&hw->agg_list);
> +	/* Initialize max burst size */
> +	if (!hw->max_burst_size)
> +		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
> +
> +	status = ice_init_fltr_mgmt_struct(hw);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +	/* some of the register write workarounds to get Rx working */
> +	ice_dev_onetime_setup(hw);
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +	/* Get MAC information */
> +	/* A single port can report up to two (LAN and WoL) addresses */
> +	mac_buf = ice_calloc(hw, 2,
> +			     sizeof(struct ice_aqc_manage_mac_read_resp));
> +	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
> +
> +	if (!mac_buf) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_fltr_mgmt_struct;
> +	}
> +
> +	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
> +	ice_free(hw, mac_buf);
> +
> +	if (status)
> +		goto err_unroll_fltr_mgmt_struct;
> +
> +	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
> +	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
> +
> +
> +	return ICE_SUCCESS;
> +
> +err_unroll_fltr_mgmt_struct:
> +	ice_cleanup_fltr_mgmt_struct(hw);
> +err_unroll_sched:
> +	ice_sched_cleanup_all(hw);
> +err_unroll_alloc:
> +	ice_free(hw, hw->port_info);
> +	hw->port_info = NULL;
> +err_unroll_cqinit:
> +	ice_shutdown_all_ctrlq(hw);
> +	return status;
> +}
> +
> +/**
> + * ice_deinit_hw - unroll initialization operations done by ice_init_hw
> + * @hw: pointer to the hardware structure
> + *
> + * This should be called only during nominal operation, not as a result of
> + * ice_init_hw() failing since ice_init_hw() will take care of unrolling
> + * applicable initializations if it fails for any reason.
> + */
> +void ice_deinit_hw(struct ice_hw *hw)
> +{
> +	ice_cleanup_fltr_mgmt_struct(hw);
> +
> +	ice_sched_cleanup_all(hw);
> +	ice_sched_clear_agg(hw);
> +
> +	if (hw->port_info) {
> +		ice_free(hw, hw->port_info);
> +		hw->port_info = NULL;
> +	}
> +
> +	/* Attempt to disable FW logging before shutting down control queues */
> +	ice_cfg_fw_log(hw, false);
> +	ice_shutdown_all_ctrlq(hw);
> +
> +	/* Clear VSI contexts if not already cleared */
> +	ice_clear_all_vsi_ctx(hw);
> +}
> +
> +/**
> + * ice_check_reset - Check to see if a global reset is complete
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_check_reset(struct ice_hw *hw)
> +{
> +	u32 cnt, reg = 0, grst_delay;
> +
> +	/* Poll for Device Active state in case a recent CORER, GLOBR,
> +	 * or EMPR has occurred. The grst delay value is in 100ms units.
> +	 * Add 1sec for outstanding AQ commands that can take a long time.
> +	 */
> +#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
> +#define GLGEN_RSTCTL_GRSTDEL_S	0
> +#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
> +	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
> +		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
> +
> +	for (cnt = 0; cnt < grst_delay; cnt++) {
> +		ice_msec_delay(100, true);
> +		reg = rd32(hw, GLGEN_RSTAT);
> +		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
> +			break;
> +	}
> +
> +	if (cnt == grst_delay) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Global reset polling failed to complete.\n");
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
> +				 GLNVM_ULD_GLOBR_DONE_M)
> +
> +	/* Device is Active; check Global Reset processes are done */
> +	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
> +		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
> +		if (reg == ICE_RESET_DONE_MASK) {
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "Global reset processes done. %d\n", cnt);
> +			break;
> +		}
> +		ice_msec_delay(10, true);
> +	}
> +
> +	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
> +			  reg);
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_pf_reset - Reset the PF
> + * @hw: pointer to the hardware structure
> + *
> + * If a global reset has been triggered, this function checks
> + * for its completion and then issues the PF reset
> + */
> +static enum ice_status ice_pf_reset(struct ice_hw *hw)
> +{
> +	u32 cnt, reg;
> +
> +	/* If at function entry a global reset was already in progress, i.e.
> +	 * state is not 'device active' or any of the reset done bits are not
> +	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
> +	 * global reset is done.
> +	 */
> +	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
> +	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
> +		/* poll on global reset currently in progress until done */
> +		if (ice_check_reset(hw))
> +			return ICE_ERR_RESET_FAILED;
> +
> +		return ICE_SUCCESS;
> +	}
> +
> +	/* Reset the PF */
> +	reg = rd32(hw, PFGEN_CTRL);
> +
> +	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
> +
> +	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
> +		reg = rd32(hw, PFGEN_CTRL);
> +		if (!(reg & PFGEN_CTRL_PFSWR_M))
> +			break;
> +
> +		ice_msec_delay(1, true);
> +	}
> +
> +	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "PF reset polling failed to complete.\n");
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_reset - Perform different types of reset
> + * @hw: pointer to the hardware structure
> + * @req: reset request
> + *
> + * This function triggers a reset as specified by the req parameter.
> + *
> + * Note:
> + * If anything other than a PF reset is triggered, PXE mode is restored.
> + * This has to be cleared using ice_clear_pxe_mode again, once the AQ
> + * interface has been restored in the rebuild flow.
> + */
> +enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
> +{
> +	u32 val = 0;
> +
> +	switch (req) {
> +	case ICE_RESET_PFR:
> +		return ice_pf_reset(hw);
> +	case ICE_RESET_CORER:
> +		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
> +		val = GLGEN_RTRIG_CORER_M;
> +		break;
> +	case ICE_RESET_GLOBR:
> +		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
> +		val = GLGEN_RTRIG_GLOBR_M;
> +		break;
> +	default:
> +		return ICE_ERR_PARAM;
> +	}
> +
> +	val |= rd32(hw, GLGEN_RTRIG);
> +	wr32(hw, GLGEN_RTRIG, val);
> +	ice_flush(hw);
> +
> +	/* wait for the FW to be ready */
> +	return ice_check_reset(hw);
> +}
> +
> +/**
> + * ice_copy_rxq_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_rxq_ctx: pointer to the rxq context
> + * @rxq_index: the index of the Rx queue
> + *
> + * Copies rxq context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
> +{
> +	u8 i;
> +
> +	if (!ice_rxq_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (rxq_index > QRX_CTRL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, QRX_CONTEXT(i, rxq_index),
> +		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Rx Queue Context */
> +static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
> +	/* Field		Width	LSB */
> +	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
> +	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
> +	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
> +	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
> +	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
> +	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
> +	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
> +	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
> +	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
> +	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
> +	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
> +	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
> +	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
> +	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
> +	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_rxq_ctx
> + * @hw: pointer to the hardware structure
> + * @rlan_ctx: pointer to the rxq context
> + * @rxq_index: the index of the Rx queue
> + *
> + * Converts rxq context from sparse to dense structure and then writes
> + * it to hw register space
> + */
> +enum ice_status
> +ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
> +		  u32 rxq_index)
> +{
> +	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
> +
> +	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
> +	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
> +}
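
For reference, a caller fills the sparse context roughly like this
(illustrative sketch only; the field values and the 128-byte units for
base/dbuf are assumptions, not taken from this patch):

	struct ice_rlan_ctx rlan_ctx = { 0 };

	rlan_ctx.base = rxq_dma_addr >> 7;	/* assumed 128-byte units */
	rlan_ctx.qlen = nb_rx_desc;
	rlan_ctx.dbuf = rx_buf_size >> 7;	/* assumed 128-byte units */
	rlan_ctx.dsize = 1;			/* 32-byte descriptors */
	if (ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index))
		return -EIO; /* or propagate the ice_status */
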
> +
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +/**
> + * ice_clear_rxq_ctx
> + * @hw: pointer to the hardware structure
> + * @rxq_index: the index of the Rx queue to clear
> + *
> + * Clears rxq context in hw register space
> + */
> +enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
> +{
> +	u8 i;
> +
> +	if (rxq_index > QRX_CTRL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +/* LAN Tx Queue Context */
> +const struct ice_ctx_ele ice_tlan_ctx_info[] = {
> +				    /* Field			Width	LSB */
> +	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
> +	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
> +	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
> +	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
> +	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
> +	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
> +	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
> +	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
> +	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
> +	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
> +	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
> +	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
> +	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
> +	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
> +	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
> +	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
> +	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
> +	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
> +	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
> +	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
> +	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
> +	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
> +	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
> +	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
> +	{ 0 }
> +};
> +
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +/**
> + * ice_copy_tx_cmpltnq_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
> + * @tx_cmpltnq_index: the index of the completion queue
> + *
> + * Copies Tx completion q context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
> +			      u32 tx_cmpltnq_index)
> +{
> +	u8 i;
> +
> +	if (!ice_tx_cmpltnq_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
> +		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Tx Completion Queue Context */
> +static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
> +				       /* Field			Width   LSB */
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_tx_cmpltnq_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_cmpltnq_ctx: pointer to the completion queue context
> + * @tx_cmpltnq_index: the index of the completion queue
> + *
> + * Converts completion queue context from sparse to dense structure and then
> + * writes it to hw register space
> + */
> +enum ice_status
> +ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
> +			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
> +			 u32 tx_cmpltnq_index)
> +{
> +	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
> +
> +	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
> +	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
> +}
> +
> +/**
> + * ice_clear_tx_cmpltnq_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_cmpltnq_index: the index of the completion queue to clear
> + *
> + * Clears Tx completion queue context in hw register space
> + */
> +enum ice_status
> +ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
> +{
> +	u8 i;
> +
> +	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_copy_tx_drbell_q_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
> + * @tx_drbell_q_index: the index of the doorbell queue
> + *
> + * Copies doorbell q context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
> +			       u32 tx_drbell_q_index)
> +{
> +	u8 i;
> +
> +	if (!ice_tx_drbell_q_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
> +		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Tx Doorbell Queue Context info */
> +static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
> +					/* Field		Width   LSB */
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_tx_drbell_q_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_drbell_q_ctx: pointer to the doorbell queue context
> + * @tx_drbell_q_index: the index of the doorbell queue
> + *
> + * Converts doorbell queue context from sparse to dense structure and then
> + * writes it to hw register space
> + */
> +enum ice_status
> +ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
> +			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
> +			  u32 tx_drbell_q_index)
> +{
> +	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
> +
> +	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
> +	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
> +}
> +
> +/**
> + * ice_clear_tx_drbell_q_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_drbell_q_index: the index of the doorbell queue to clear
> + *
> + * Clears doorbell queue context in hw register space
> + */
> +enum ice_status
> +ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
> +{
> +	u8 i;
> +
> +	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +/**
> + * ice_debug_cq
> + * @hw: pointer to the hardware structure
> + * @mask: debug mask
> + * @desc: pointer to control queue descriptor
> + * @buf: pointer to command buffer
> + * @buf_len: max length of buf
> + *
> + * Dumps debug log about control command with descriptor contents.
> + */
> +void
> +ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
> +{
> +	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
> +	u16 len;
> +
> +	if (!(mask & hw->debug_mask))
> +		return;
> +
> +	if (!desc)
> +		return;
> +
> +	len = LE16_TO_CPU(cq_desc->datalen);
> +
> +	ice_debug(hw, mask,
> +		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
> +		  LE16_TO_CPU(cq_desc->opcode),
> +		  LE16_TO_CPU(cq_desc->flags),
> +		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
> +	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->cookie_high),
> +		  LE32_TO_CPU(cq_desc->cookie_low));
> +	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->params.generic.param0),
> +		  LE32_TO_CPU(cq_desc->params.generic.param1));
> +	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
> +		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
> +	if (buf && cq_desc->datalen != 0) {
> +		ice_debug(hw, mask, "Buffer:\n");
> +		if (buf_len < len)
> +			len = buf_len;
> +
> +		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
> +	}
> +}
> +
> +/* FW Admin Queue command wrappers */
> +
> +/**
> + * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
> + * @hw: pointer to the hw struct
> + * @desc: descriptor describing the command
> + * @buf: buffer to use for indirect commands (NULL for direct commands)
> + * @buf_size: size of buffer for indirect commands (0 for direct commands)
> + * @cd: pointer to command details structure
> + *
> + * Helper function to send FW Admin Queue commands to the FW Admin Queue.
> + */
> +enum ice_status
> +ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
> +		u16 buf_size, struct ice_sq_cd *cd)
> +{
> +	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
> +}
> +
> +/**
> + * ice_aq_get_fw_ver
> + * @hw: pointer to the hw struct
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get the firmware version (0x0001) from the admin queue commands
> + */
> +enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_get_ver *resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	resp = &desc.params.get_ver;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
> +
> +	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +
> +	if (!status) {
> +		hw->fw_branch = resp->fw_branch;
> +		hw->fw_maj_ver = resp->fw_major;
> +		hw->fw_min_ver = resp->fw_minor;
> +		hw->fw_patch = resp->fw_patch;
> +		hw->fw_build = LE32_TO_CPU(resp->fw_build);
> +		hw->api_branch = resp->api_branch;
> +		hw->api_maj_ver = resp->api_major;
> +		hw->api_min_ver = resp->api_minor;
> +		hw->api_patch = resp->api_patch;
> +	}
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_q_shutdown
> + * @hw: pointer to the hw struct
> + * @unloading: is the driver unloading itself
> + *
> + * Tell the Firmware that we're shutting down the AdminQ and whether
> + * or not the driver is unloading as well (0x0003).
> + */
> +enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
> +{
> +	struct ice_aqc_q_shutdown *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.q_shutdown;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
> +
> +	if (unloading)
> +		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_aq_req_res
> + * @hw: pointer to the hw struct
> + * @res: resource id
> + * @access: access type
> + * @sdp_number: resource number
> + * @timeout: the maximum time in ms that the driver may hold the resource
> + * @cd: pointer to command details structure or NULL
> + *
> + * Requests common resource using the admin queue commands (0x0008).
> + * When attempting to acquire the Global Config Lock, the driver can
> + * learn of three states:
> + *  1) ICE_SUCCESS -        acquired lock, and can perform download package
> + *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
> + *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
> + *                          successfully downloaded the package; the driver does
> + *                          not have to download the package and can continue
> + *                          loading
> + *
> + * Note that if the caller is in an acquire-lock, perform-action, release-lock
> + * phase of operation, it is possible that the FW may detect a timeout and issue
> + * a CORER. In this case, the driver will receive a CORER interrupt and will
> + * have to determine its cause. The calling thread that is handling this flow
> + * will likely get an error propagated back to it indicating the Download
> + * Package, Update Package or the Release Resource AQ commands timed out.
> + */
> +static enum ice_status
> +ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
> +	       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_req_res *cmd_resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
> +
> +	cmd_resp = &desc.params.res_owner;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
> +
> +	cmd_resp->res_id = CPU_TO_LE16(res);
> +	cmd_resp->access_type = CPU_TO_LE16(access);
> +	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
> +	cmd_resp->timeout = CPU_TO_LE32(*timeout);
> +	*timeout = 0;
> +
> +	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +
> +	/* The completion specifies the maximum time in ms that the driver
> +	 * may hold the resource in the Timeout field.
> +	 */
> +
> +	/* Global config lock response utilizes an additional status field.
> +	 *
> +	 * If the Global config lock resource is held by some other driver, the
> +	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
> +	 * and the timeout field indicates the maximum time the current owner
> +	 * of the resource has to free it.
> +	 */
> +	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
> +		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
> +			*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +			return ICE_SUCCESS;
> +		} else if (LE16_TO_CPU(cmd_resp->status) ==
> +			   ICE_AQ_RES_GLBL_IN_PROG) {
> +			*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +			return ICE_ERR_AQ_ERROR;
> +		} else if (LE16_TO_CPU(cmd_resp->status) ==
> +			   ICE_AQ_RES_GLBL_DONE) {
> +			return ICE_ERR_AQ_NO_WORK;
> +		}
> +
> +		/* invalid FW response, force a timeout immediately */
> +		*timeout = 0;
> +		return ICE_ERR_AQ_ERROR;
> +	}
> +
> +	/* If the resource is held by some other driver, the command completes
> +	 * with a busy return value and the timeout field indicates the maximum
> +	 * time the current owner of the resource has to free it.
> +	 */
> +	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
> +		*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_release_res
> + * @hw: pointer to the hw struct
> + * @res: resource id
> + * @sdp_number: resource number
> + * @cd: pointer to command details structure or NULL
> + *
> + * release common resource using the admin queue commands (0x0009)
> + */
> +static enum ice_status
> +ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
> +		   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_req_res *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
> +
> +	cmd = &desc.params.res_owner;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
> +
> +	cmd->res_id = CPU_TO_LE16(res);
> +	cmd->res_number = CPU_TO_LE32(sdp_number);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_acquire_res
> + * @hw: pointer to the HW structure
> + * @res: resource id
> + * @access: access type (read or write)
> + * @timeout: timeout in milliseconds
> + *
> + * This function will attempt to acquire the ownership of a resource.
> + */
> +enum ice_status
> +ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +		enum ice_aq_res_access_type access, u32 timeout)
> +{
> +#define ICE_RES_POLLING_DELAY_MS	10
> +	u32 delay = ICE_RES_POLLING_DELAY_MS;
> +	u32 time_left = timeout;
> +	enum ice_status status;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
> +
> +	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
> +
> +	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
> +	 * previously acquired the resource and performed any necessary updates;
> +	 * in this case the caller does not obtain the resource and has no
> +	 * further work to do.
> +	 */
> +	if (status == ICE_ERR_AQ_NO_WORK)
> +		goto ice_acquire_res_exit;
> +
> +	if (status)
> +		ice_debug(hw, ICE_DBG_RES,
> +			  "resource %d acquire type %d failed.\n", res, access);
> +
> +	/* If necessary, poll until the current lock owner times out */
> +	timeout = time_left;
> +	while (status && timeout && time_left) {
> +		ice_msec_delay(delay, true);
> +		timeout = (timeout > delay) ? timeout - delay : 0;
> +		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
> +
> +		if (status == ICE_ERR_AQ_NO_WORK)
> +			/* lock free, but no work to do */
> +			break;
> +
> +		if (!status)
> +			/* lock acquired */
> +			break;
> +	}
> +	if (status && status != ICE_ERR_AQ_NO_WORK)
> +		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
> +
> +ice_acquire_res_exit:
> +	if (status == ICE_ERR_AQ_NO_WORK) {
> +		if (access == ICE_RES_WRITE)
> +			ice_debug(hw, ICE_DBG_RES,
> +				  "resource indicates no work to do.\n");
> +		else
> +			ice_debug(hw, ICE_DBG_RES,
> +				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
> +	}
> +	return status;
> +}
> +
> +/**
> + * ice_release_res
> + * @hw: pointer to the HW structure
> + * @res: resource id
> + *
> + * This function will release a resource using the proper Admin Command.
> + */
> +void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
> +{
> +	enum ice_status status;
> +	u32 total_delay = 0;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
> +
> +	status = ice_aq_release_res(hw, res, 0, NULL);
> +
> +	/* there are some rare cases when trying to release the resource
> +	 * results in an admin Q timeout, so handle them correctly
> +	 */
> +	while ((status == ICE_ERR_AQ_TIMEOUT) &&
> +	       (total_delay < hw->adminq.sq_cmd_timeout)) {
> +		ice_msec_delay(1, true);
> +		status = ice_aq_release_res(hw, res, 0, NULL);
> +		total_delay++;
> +	}
> +}
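
The intended caller pattern is acquire / act / release, e.g. (sketch;
ICE_NVM_RES_ID and ICE_NVM_TIMEOUT are assumed from the NVM code):

	if (!ice_acquire_res(hw, ICE_NVM_RES_ID, ICE_RES_READ,
			     ICE_NVM_TIMEOUT)) {
		/* ... access the shared resource ... */
		ice_release_res(hw, ICE_NVM_RES_ID);
	}
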
> +
> +/**
> + * ice_aq_alloc_free_res - command to allocate/free resources
> + * @hw: pointer to the hw struct
> + * @num_entries: number of resource entries in buffer
> + * @buf: Indirect buffer to hold data parameters and response
> + * @buf_size: size of buffer for indirect commands
> + * @opc: pass in the command opcode
> + * @cd: pointer to command details structure or NULL
> + *
> + * Helper function to allocate/free resources using the admin queue commands
> + */
> +enum ice_status
> +ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
> +		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
> +		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_alloc_free_res_cmd *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
> +
> +	cmd = &desc.params.sw_res_ctrl;
> +
> +	if (!buf)
> +		return ICE_ERR_PARAM;
> +
> +	if (buf_size < (num_entries * sizeof(buf->elem[0])))
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, opc);
> +
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	cmd->num_entries = CPU_TO_LE16(num_entries);
> +
> +	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +}
> +
> +/**
> + * ice_get_num_per_func - determine number of resources per PF
> + * @hw: pointer to the hw structure
> + * @max: value to be evenly split between each PF
> + *
> + * Determine the number of valid functions by going through the bitmap returned
> + * from parsing capabilities and use this to calculate the number of resources
> + * per PF based on the max value passed in.
> + */
> +static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
> +{
> +	u8 funcs;
> +
> +#define ICE_CAPS_VALID_FUNCS_M	0xFF
> +	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
> +			     ICE_CAPS_VALID_FUNCS_M);
> +
> +	if (!funcs)
> +		return 0;
> +
> +	return max / funcs;
> +}
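
(So with all eight functions valid, i.e. valid_functions = 0xFF, a max
of 768 splits to 96 resources per PF.)
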
> +
> +/**
> + * ice_parse_caps - parse function/device capabilities
> + * @hw: pointer to the hw struct
> + * @buf: pointer to a buffer containing function/device capability records
> + * @cap_count: number of capability records in the list
> + * @opc: type of capabilities list to parse
> + *
> + * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
> + */
> +static void
> +ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
> +	       enum ice_adminq_opc opc)
> +{
> +	struct ice_aqc_list_caps_elem *cap_resp;
> +	struct ice_hw_func_caps *func_p = NULL;
> +	struct ice_hw_dev_caps *dev_p = NULL;
> +	struct ice_hw_common_caps *caps;
> +	u32 i;
> +
> +	if (!buf)
> +		return;
> +
> +	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
> +
> +	if (opc == ice_aqc_opc_list_dev_caps) {
> +		dev_p = &hw->dev_caps;
> +		caps = &dev_p->common_cap;
> +	} else if (opc == ice_aqc_opc_list_func_caps) {
> +		func_p = &hw->func_caps;
> +		caps = &func_p->common_cap;
> +	} else {
> +		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
> +		return;
> +	}
> +
> +	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
> +		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
> +		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
> +		u32 number = LE32_TO_CPU(cap_resp->number);
> +		u16 cap = LE16_TO_CPU(cap_resp->cap);
> +
> +		switch (cap) {
> +		case ICE_AQC_CAPS_VALID_FUNCTIONS:
> +			caps->valid_functions = number;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Valid Functions = %d\n",
> +				  caps->valid_functions);
> +			break;
> +		case ICE_AQC_CAPS_VSI:
> +			if (dev_p) {
> +				dev_p->num_vsi_allocd_to_host = number;
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Dev.VSI cnt = %d\n",
> +					  dev_p->num_vsi_allocd_to_host);
> +			} else if (func_p) {
> +				func_p->guar_num_vsi =
> +					ice_get_num_per_func(hw, ICE_MAX_VSI);
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Func.VSI cnt = %d\n",
> +					  number);
> +			}
> +			break;
> +		case ICE_AQC_CAPS_RSS:
> +			caps->rss_table_size = number;
> +			caps->rss_table_entry_width = logical_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: RSS table size = %d\n",
> +				  caps->rss_table_size);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: RSS table width = %d\n",
> +				  caps->rss_table_entry_width);
> +			break;
> +		case ICE_AQC_CAPS_RXQS:
> +			caps->num_rxq = number;
> +			caps->rxq_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Rx first queue ID = %d\n",
> +				  caps->rxq_first_id);
> +			break;
> +		case ICE_AQC_CAPS_TXQS:
> +			caps->num_txq = number;
> +			caps->txq_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Tx first queue ID = %d\n",
> +				  caps->txq_first_id);
> +			break;
> +		case ICE_AQC_CAPS_MSIX:
> +			caps->num_msix_vectors = number;
> +			caps->msix_vector_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: MSIX vector count = %d\n",
> +				  caps->num_msix_vectors);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: MSIX first vector index = %d\n",
> +				  caps->msix_vector_first_id);
> +			break;
> +		case ICE_AQC_CAPS_MAX_MTU:
> +			caps->max_mtu = number;
> +			if (dev_p)
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Dev.MaxMTU = %d\n",
> +					  caps->max_mtu);
> +			else if (func_p)
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: func.MaxMTU = %d\n",
> +					  caps->max_mtu);
> +			break;
> +		default:
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
> +				  cap);
> +			break;
> +		}
> +	}
> +}
> +
> +/**
> + * ice_aq_discover_caps - query function/device capabilities
> + * @hw: pointer to the hw struct
> + * @buf: a virtual buffer to hold the capabilities
> + * @buf_size: Size of the virtual buffer
> + * @cap_count: cap count needed if AQ err==ENOMEM
> + * @opc: capabilities type to discover - pass in the command opcode
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get the function(0x000a)/device(0x000b) capabilities description from
> + * the firmware.
> + */
> +static enum ice_status
> +ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
> +		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_list_caps *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	cmd = &desc.params.get_cap;
> +
> +	if (opc != ice_aqc_opc_list_func_caps &&
> +	    opc != ice_aqc_opc_list_dev_caps)
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, opc);
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +	if (!status)
> +		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
> +	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
> +		*cap_count = LE32_TO_CPU(cmd->count);
> +	return status;
> +}
> +
> +/**
> + * ice_discover_caps - get info about the HW
> + * @hw: pointer to the hardware structure
> + * @opc: capabilities type to discover - pass in the command opcode
> + */
> +static enum ice_status
> +ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
> +{
> +	enum ice_status status;
> +	u32 cap_count;
> +	u16 cbuf_len;
> +	u8 retries;
> +
> +	/* The driver doesn't know how many capabilities the device will return
> +	 * so the buffer size required isn't known ahead of time. The driver
> +	 * starts with cbuf_len and if this turns out to be insufficient, the
> +	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
> +	 * The driver then allocates the buffer based on the count and retries
> +	 * the operation. So it follows that the retry count is 2.
> +	 */
> +#define ICE_GET_CAP_BUF_COUNT	40
> +#define ICE_GET_CAP_RETRY_COUNT	2
> +
> +	cap_count = ICE_GET_CAP_BUF_COUNT;
> +	retries = ICE_GET_CAP_RETRY_COUNT;
> +
> +	do {
> +		void *cbuf;
> +
> +		cbuf_len = (u16)(cap_count *
> +				 sizeof(struct ice_aqc_list_caps_elem));
> +		cbuf = ice_malloc(hw, cbuf_len);
> +		if (!cbuf)

== NULL

(Throughout this patch: pointer checks like "if (!cbuf)" should be
explicit NULL comparisons, i.e. "if (cbuf == NULL)"; the same applies
to the other "== NULL" notes below.)

> +			return ICE_ERR_NO_MEMORY;
> +
> +		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
> +					      opc, NULL);
> +		ice_free(hw, cbuf);
> +
> +		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
> +			break;
> +
> +		/* If ENOMEM is returned, try again with a bigger buffer */
> +	} while (--retries);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_get_caps - get info about the HW
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_get_caps(struct ice_hw *hw)
> +{
> +	enum ice_status status;
> +
> +	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
> +	if (!status)
> +		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_manage_mac_write - manage MAC address write command
> + * @hw: pointer to the hw struct
> + * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
> + * @flags: flags to control write behavior
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function is used to write MAC address to the NVM (0x0108).
> + */
> +enum ice_status
> +ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
> +			struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_manage_mac_write *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.mac_write;
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
> +
> +	cmd->flags = flags;
> +
> +	/* Prep values for flags, sah, sal */
> +	cmd->sah = HTONS(*((const u16 *)mac_addr));
> +	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));

Any particular reason these aren't rte_cpu_to_be_16/32?
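
Something like this would be the direct equivalent (untested sketch;
rte_cpu_to_be_16/32 come from rte_byteorder.h):

	cmd->sah = rte_cpu_to_be_16(*(const u16 *)mac_addr);
	cmd->sal = rte_cpu_to_be_32(*(const u32 *)(mac_addr + 2));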

> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_clear_pxe_mode
> + * @hw: pointer to the hw struct
> + *
> + * Tell the firmware that the driver is taking over from PXE (0x0110).
> + */
> +static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
> +{
> +	struct ice_aq_desc desc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
> +	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_clear_pxe_mode - clear pxe operations mode
> + * @hw: pointer to the hw struct
> + *
> + * Make sure all PXE mode settings are cleared, including things
> + * like descriptor fetch/write-back mode.
> + */
> +void ice_clear_pxe_mode(struct ice_hw *hw)
> +{
> +	if (ice_check_sq_alive(hw, &hw->adminq))
> +		ice_aq_clear_pxe_mode(hw);
> +}
> +
> +/**
> + * ice_get_link_speed_based_on_phy_type - returns link speed
> + * @phy_type_low: lower part of phy_type
> + * @phy_type_high: higher part of phy_type
> + *
> + * This helper function will convert an entry in the phy type structure
> + * [phy_type_low, phy_type_high] to its corresponding link speed.
> + * Note: exactly one bit should be set in [phy_type_low, phy_type_high],
> + * as this function converts a single phy type to its speed.
> + * If no bit is set, or if more than one bit is set,
> + * ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
> + */
> +static u16
> +ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
> +{
> +	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
> +	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
> +
> +	switch (phy_type_low) {
> +	case ICE_PHY_TYPE_LOW_100BASE_TX:
> +	case ICE_PHY_TYPE_LOW_100M_SGMII:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_1000BASE_T:
> +	case ICE_PHY_TYPE_LOW_1000BASE_SX:
> +	case ICE_PHY_TYPE_LOW_1000BASE_LX:
> +	case ICE_PHY_TYPE_LOW_1000BASE_KX:
> +	case ICE_PHY_TYPE_LOW_1G_SGMII:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_2500BASE_T:
> +	case ICE_PHY_TYPE_LOW_2500BASE_X:
> +	case ICE_PHY_TYPE_LOW_2500BASE_KX:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_5GBASE_T:
> +	case ICE_PHY_TYPE_LOW_5GBASE_KR:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_10GBASE_T:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
> +	case ICE_PHY_TYPE_LOW_10GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_10GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_25GBASE_T:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
> +	case ICE_PHY_TYPE_LOW_25GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
> +	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
> +	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_40G_XLAUI:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
> +	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_LAUI2:
> +	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_AUI2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_CP:
> +	case ICE_PHY_TYPE_LOW_50GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_FR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
> +	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_AUI1:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
> +	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_100G_CAUI4:
> +	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_100G_AUI4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
> +	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
> +	case ICE_PHY_TYPE_LOW_100GBASE_DR:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
> +		break;
> +	default:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
> +		break;
> +	}
> +
> +	switch (phy_type_high) {
> +	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
> +	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
> +	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_HIGH_100G_AUI2:
> +		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
> +		break;
> +	default:
> +		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
> +		break;
> +	}
> +
> +	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
> +	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return ICE_AQ_LINK_SPEED_UNKNOWN;
> +	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
> +		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return ICE_AQ_LINK_SPEED_UNKNOWN;
> +	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
> +		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return speed_phy_type_low;
> +	else
> +		return speed_phy_type_high;
> +}
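
E.g. ice_get_link_speed_based_on_phy_type(ICE_PHY_TYPE_LOW_25GBASE_SR, 0)
yields ICE_AQ_LINK_SPEED_25GB, while any input with zero or multiple
bits set yields ICE_AQ_LINK_SPEED_UNKNOWN.
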
> +
> +/**
> + * ice_update_phy_type
> + * @phy_type_low: pointer to the lower part of phy_type
> + * @phy_type_high: pointer to the higher part of phy_type
> + * @link_speeds_bitmap: targeted link speeds bitmap
> + *
> + * Note: For the link_speeds_bitmap format, see
> + * [ice_aqc_get_link_status->link_speed]. The caller can pass in a
> + * link_speeds_bitmap that includes multiple speeds.
> + *
> + * Each entry in the [phy_type_low, phy_type_high] structure represents
> + * a certain link speed. This helper function turns on the bits in
> + * [phy_type_low, phy_type_high] that correspond to the value of the
> + * link_speeds_bitmap input parameter.
> + */
> +void
> +ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
> +		    u16 link_speeds_bitmap)
> +{
> +	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
> +	u64 pt_high;
> +	u64 pt_low;
> +	int index;
> +
> +	/* We first check with low part of phy_type */
> +	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
> +		pt_low = BIT_ULL(index);
> +		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
> +
> +		if (link_speeds_bitmap & speed)
> +			*phy_type_low |= BIT_ULL(index);
> +	}
> +
> +	/* We then check with high part of phy_type */
> +	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
> +		pt_high = BIT_ULL(index);
> +		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
> +
> +		if (link_speeds_bitmap & speed)
> +			*phy_type_high |= BIT_ULL(index);
> +	}
> +}
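
Caller-side usage, for illustration (speeds chosen arbitrarily):

	u64 phy_low = 0, phy_high = 0;

	ice_update_phy_type(&phy_low, &phy_high,
			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
	/* phy_low/phy_high now have all 10G and 25G PHY type bits set */
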
> +
> +/**
> + * ice_aq_set_phy_cfg
> + * @hw: pointer to the hw struct
> + * @lport: logical port number
> + * @cfg: structure with PHY configuration data to be set
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set the various PHY configuration parameters supported on the Port.
> + * One or more of the Set PHY config parameters may be ignored in an MFP
> + * mode as the PF may not have the privilege to set some of the PHY Config
> + * parameters. This status will be indicated by the command response (0x0601).
> + */
> +enum ice_status
> +ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
> +		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
> +{
> +	struct ice_aq_desc desc;
> +
> +	if (!cfg)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
> +	desc.params.set_phy.lport_num = lport;
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
> +}
> +
> +/**
> + * ice_update_link_info - update status of the HW network link
> + * @pi: port info structure of the interested logical port
> + */
> +enum ice_status ice_update_link_info(struct ice_port_info *pi)
> +{
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	struct ice_phy_info *phy_info;
> +	enum ice_status status;
> +	struct ice_hw *hw;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	hw = pi->hw;
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));

No cast required.

> +	if (!pcaps)
> +		return ICE_ERR_NO_MEMORY;

== NULL

> +
> +	phy_info = &pi->phy;
> +	status = ice_aq_get_link_info(pi, true, NULL, NULL);
> +	if (status)
> +		goto out;
> +
> +	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
> +		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
> +					     pcaps, NULL);
> +		if (status)
> +			goto out;
> +
> +		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
> +			   sizeof(phy_info->link_info.module_type),
> +			   ICE_NONDMA_TO_NONDMA);
> +	}
> +out:
> +	ice_free(hw, pcaps);
> +	return status;
> +}
> +
> +/**
> + * ice_set_fc
> + * @pi: port information structure
> + * @aq_failures: pointer to status code, specific to ice_set_fc routine
> + * @ena_auto_link_update: enable automatic link update
> + *
> + * Set the requested flow control mode.
> + */
> +enum ice_status
> +ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
> +{
> +	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	enum ice_status status;
> +	u8 pause_mask = 0x0;
> +	struct ice_hw *hw;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

== NULL

> +	hw = pi->hw;
> +	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
> +
> +	switch (pi->fc.req_mode) {
> +	case ICE_FC_FULL:
> +		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
> +		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
> +		break;
> +	case ICE_FC_RX_PAUSE:
> +		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
> +		break;
> +	case ICE_FC_TX_PAUSE:
> +		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));

No cast required.

> +	if (!pcaps)
> +		return ICE_ERR_NO_MEMORY;

== NULL

> +
> +	/* Get the current phy config */
> +	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
> +				     NULL);
> +	if (status) {
> +		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
> +		goto out;
> +	}
> +
> +	/* clear the old pause settings */
> +	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
> +				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
> +	/* set the new capabilities */
> +	cfg.caps |= pause_mask;
> +	/* If the capabilities have changed, then set the new config */
> +	if (cfg.caps != pcaps->caps) {
> +		int retry_count, retry_max = 10;
> +
> +		/* Auto restart link so settings take effect */
> +		if (ena_auto_link_update)
> +			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
> +		/* Copy over all the old settings */
> +		cfg.phy_type_high = pcaps->phy_type_high;
> +		cfg.phy_type_low = pcaps->phy_type_low;
> +		cfg.low_power_ctrl = pcaps->low_power_ctrl;
> +		cfg.eee_cap = pcaps->eee_cap;
> +		cfg.eeer_value = pcaps->eeer_value;
> +		cfg.link_fec_opt = pcaps->link_fec_options;
> +
> +		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
> +		if (status) {
> +			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
> +			goto out;
> +		}
> +
> +		/* Update the link info
> +		 * It sometimes takes a really long time for link to
> +		 * come back from the atomic reset. Thus, we wait a
> +		 * little bit.
> +		 */
> +		for (retry_count = 0; retry_count < retry_max; retry_count++) {
> +			status = ice_update_link_info(pi);
> +
> +			if (status == ICE_SUCCESS)
> +				break;
> +
> +			ice_msec_delay(100, true);
> +		}
> +
> +		if (status)
> +			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
> +	}
> +
> +out:
> +	ice_free(hw, pcaps);
> +	return status;
> +}
> +
> +/**
> + * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
> + * @caps: PHY ability structure to copy data from
> + * @cfg: PHY configuration structure to copy data to
> + *
> + * Helper function to copy AQC PHY get ability data to PHY set configuration
> + * data structure
> + */
> +void
> +ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
> +			 struct ice_aqc_set_phy_cfg_data *cfg)
> +{
> +	if (!caps || !cfg)
> +		return;

== NULL

> +
> +	cfg->phy_type_low = caps->phy_type_low;
> +	cfg->phy_type_high = caps->phy_type_high;
> +	cfg->caps = caps->caps;
> +	cfg->low_power_ctrl = caps->low_power_ctrl;
> +	cfg->eee_cap = caps->eee_cap;
> +	cfg->eeer_value = caps->eeer_value;
> +	cfg->link_fec_opt = caps->link_fec_options;
> +}
> +
> +/**
> + * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
> + * @cfg: PHY configuration data to set FEC mode
> + * @fec: FEC mode to configure
> + *
> + * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
> + * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
> + * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
> + */
> +void
> +ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
> +{
> +	switch (fec) {
> +	case ICE_FEC_BASER:
> +		/* Clear auto FEC and RS bits, and AND BASE-R ability
> +		 * bits and OR request bits.
> +		 */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
> +				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
> +		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
> +				     ICE_AQC_PHY_FEC_25G_KR_REQ;
> +		break;
> +	case ICE_FEC_RS:
> +		/* Clear auto FEC and BASE-R bits, and AND RS ability
> +		 * bits and OR request bits.
> +		 */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
> +		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
> +				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
> +		break;
> +	case ICE_FEC_NONE:
> +		/* Clear auto FEC and all FEC option bits. */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
> +		break;
> +	case ICE_FEC_AUTO:
> +		/* AND auto FEC bit, and all caps bits. */
> +		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
> +		break;
> +	}
> +}
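
So the expected flow for a FEC change is roughly (sketch; assumes pcaps
was previously filled by ice_aq_get_phy_caps() and error handling is
elided):

	struct ice_aqc_set_phy_cfg_data cfg = { 0 };

	ice_copy_phy_caps_to_cfg(pcaps, &cfg);
	ice_cfg_phy_fec(&cfg, ICE_FEC_RS);
	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
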
> +
> +/**
> + * ice_get_link_status - get status of the HW network link
> + * @pi: port information structure
> + * @link_up: pointer to bool (true/false = linkup/linkdown)
> + *
> + * Variable link_up is true if the link is up, false if it is down.
> + * The variable link_up is invalid if the returned status is non-zero.
> + * As a result of this call, link status reporting becomes enabled.
> + */
> +enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
> +{
> +	struct ice_phy_info *phy_info;
> +	enum ice_status status = ICE_SUCCESS;
> +
> +	if (!pi || !link_up)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	phy_info = &pi->phy;
> +
> +	if (phy_info->get_link_info) {
> +		status = ice_update_link_info(pi);
> +
> +		if (status)
> +			ice_debug(pi->hw, ICE_DBG_LINK,
> +				  "get link status error, status = %d\n",
> +				  status);
> +	}
> +
> +	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_set_link_restart_an
> + * @pi: pointer to the port information structure
> + * @ena_link: if true: enable link, if false: disable link
> + * @cd: pointer to command details structure or NULL
> + *
> + * Sets up the link and restarts the Auto-Negotiation over the link.
> + */
> +enum ice_status
> +ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
> +			   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_restart_an *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.restart_an;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
> +
> +	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
> +	cmd->lport_num = pi->lport;
> +	if (ena_link)
> +		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
> +	else
> +		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
> +
> +	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_set_event_mask
> + * @hw: pointer to the hw struct
> + * @port_num: port number of the physical function
> + * @mask: event mask to be set
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set event mask (0x0613)
> + */
> +enum ice_status
> +ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
> +		      struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_event_mask *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_event_mask;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
> +
> +	cmd->lport_num = port_num;
> +
> +	cmd->event_mask = CPU_TO_LE16(mask);
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_set_mac_loopback
> + * @hw: pointer to the hw struct
> + * @ena_lpbk: Enable or Disable loopback
> + * @cd: pointer to command details structure or NULL
> + *
> + * Enable/disable loopback on a given port
> + */
> +enum ice_status
> +ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_mac_lb *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_mac_lb;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
> +	if (ena_lpbk)
> +		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_set_port_id_led
> + * @pi: pointer to the port information
> + * @is_orig_mode: is this LED set to original mode (by the net-list)
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set LED value for the given port (0x06e9)
> + */
> +enum ice_status
> +ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
> +		       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_port_id_led *cmd;
> +	struct ice_hw *hw = pi->hw;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_port_id_led;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
> +
> +	if (is_orig_mode)
> +		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
> +	else
> +		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * __ice_aq_get_set_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_id: VSI FW index
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + * @glob_lut_idx: global LUT index
> + * @set: set true to set the table, false to get the table
> + *
> + * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
> + */
> +static enum ice_status
> +__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
> +			 u16 lut_size, u8 glob_lut_idx, bool set)
> +{
> +	struct ice_aqc_get_set_rss_lut *cmd_resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 flags = 0;
> +
> +	cmd_resp = &desc.params.get_set_rss_lut;
> +
> +	if (set) {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
> +		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +	} else {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
> +	}
> +
> +	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
> +					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
> +					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
> +				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
> +
> +	switch (lut_type) {
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
> +		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
> +		break;
> +	default:
> +		status = ICE_ERR_PARAM;
> +		goto ice_aq_get_set_rss_lut_exit;
> +	}
> +
> +	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
> +		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
> +			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
> +
> +		if (!set)
> +			goto ice_aq_get_set_rss_lut_send;
> +	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
> +		if (!set)
> +			goto ice_aq_get_set_rss_lut_send;
> +	} else {
> +		goto ice_aq_get_set_rss_lut_send;
> +	}
> +
> +	/* LUT size is only valid for Global and PF table types */
> +	switch (lut_size) {
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
> +		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +		break;
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
> +		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +		break;
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
> +		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
> +			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
> +				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +			break;
> +		}
> +		/* fall-through */
> +	default:
> +		status = ICE_ERR_PARAM;
> +		goto ice_aq_get_set_rss_lut_exit;
> +	}
> +
> +ice_aq_get_set_rss_lut_send:
> +	cmd_resp->flags = CPU_TO_LE16(flags);
> +	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
> +
> +ice_aq_get_set_rss_lut_exit:
> +	return status;
> +}
> +
> +/**
> + * ice_aq_get_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: software VSI handle
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + *
> + * get the RSS lookup table, PF or VSI type
> + */
> +enum ice_status
> +ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
> +		   u8 *lut, u16 lut_size)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					lut_type, lut, lut_size, 0, false);
> +}
> +
> +/**
> + * ice_aq_set_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: software VSI handle
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + *
> + * set the RSS lookup table, PF or VSI type
> + */
> +enum ice_status
> +ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
> +		   u8 *lut, u16 lut_size)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)

Prefer an explicit "lut == NULL" comparison here.

> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					lut_type, lut, lut_size, 0, true);
> +}
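
A minimal caller sketch for the set path, assuming the declarations above
(the round-robin fill and the queue count are illustrative; 512 bytes is one
of the sizes the size switch in __ice_aq_get_set_rss_lut accepts for the PF
table type):

#include "ice_common.h"

/* Sketch: program a 512-entry PF RSS LUT spreading hash results
 * round-robin across nb_q RX queues (nb_q <= 256 so an entry fits a u8).
 */
static enum ice_status
example_set_pf_rss_lut(struct ice_hw *hw, u16 vsi_handle, u16 nb_q)
{
	u8 lut[512];
	u16 i;

	for (i = 0; i < 512; i++)
		lut[i] = (u8)(i % nb_q);

	return ice_aq_set_rss_lut(hw, vsi_handle,
				  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
				  lut, sizeof(lut));
}
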
> +
> +/**
> + * __ice_aq_get_set_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_id: VSI FW index
> + * @key: pointer to key info struct
> + * @set: set true to set the key, false to get the key
> + *
> + * get (0x0B04) or set (0x0B02) the RSS key per VSI
> + */
> +static enum ice_status
> +__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
> +			 struct ice_aqc_get_set_rss_keys *key, bool set)
> +{
> +	struct ice_aqc_get_set_rss_key *cmd_resp;
> +	u16 key_size = sizeof(*key);
> +	struct ice_aq_desc desc;
> +
> +	cmd_resp = &desc.params.get_set_rss_key;
> +
> +	if (set) {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
> +		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +	} else {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
> +	}
> +
> +	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
> +					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
> +					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
> +				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
> +
> +	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
> +}
> +
> +/**
> + * ice_aq_get_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_handle: software VSI handle
> + * @key: pointer to key info struct
> + *
> + * get the RSS key per VSI
> + */
> +enum ice_status
> +ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *key)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
Prefer an explicit "key == NULL" comparison here.
> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					key, false);
> +}
> +
> +/**
> + * ice_aq_set_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_handle: software VSI handle
> + * @keys: pointer to key info struct
> + *
> + * set the RSS key per VSI
> + */
> +enum ice_status
> +ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)

Prefer an explicit "keys == NULL" comparison here.

> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					keys, true);
> +}
> +
> +/**
> + * ice_aq_add_lan_txq
> + * @hw: pointer to the hardware structure
> + * @num_qgrps: Number of added queue groups
> + * @qg_list: list of queue groups to be added
> + * @buf_size: size of buffer for indirect command
> + * @cd: pointer to command details structure or NULL
> + *
> + * Add Tx LAN queue (0x0C30)
> + *
> + * NOTE:
> + * Prior to calling add Tx LAN queue:
> + * Initialize the following as part of the Tx queue context:
> + * Completion queue ID (if the queue uses a completion queue), Quanta
> + * profile, Cache profile and Packet shaper profile.
> + *
> + * After the add Tx LAN queue AQ command is completed:
> + * Interrupts should be associated with specific queues.
> + * Association of a Tx queue to a Doorbell queue is not part of the Add LAN
> + * Tx queue flow.
> + */
> +static enum ice_status
> +ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
> +		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
> +		   struct ice_sq_cd *cd)
> +{
> +	u16 i, sum_header_size, sum_q_size = 0;
> +	struct ice_aqc_add_tx_qgrp *list;
> +	struct ice_aqc_add_txqs *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
> +
> +	cmd = &desc.params.add_txqs;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
> +
> +	if (!qg_list)

Prefer an explicit "qg_list == NULL" comparison here.

> +		return ICE_ERR_PARAM;
> +
> +	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
> +		return ICE_ERR_PARAM;
> +
> +	sum_header_size = num_qgrps *
> +		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
> +
> +	list = qg_list;
> +	for (i = 0; i < num_qgrps; i++) {
> +		struct ice_aqc_add_txqs_perq *q = list->txqs;
> +
> +		sum_q_size += list->num_txqs * sizeof(*q);
> +		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
> +	}
> +
> +	if (buf_size != (sum_header_size + sum_q_size))
> +		return ICE_ERR_PARAM;
> +
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	cmd->num_qgrps = num_qgrps;
> +
> +	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
> +}
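
The buf_size check above encodes the buffer's variable-length layout: each
group contributes its header (the struct minus the one txqs element built
into it) plus one ice_aqc_add_txqs_perq entry per queue. A sketch of the
same arithmetic for a single group:

#include "ice_common.h"

/* Sketch: the byte size ice_aq_add_lan_txq() expects for one queue group
 * carrying nq queues, mirroring the sum_header_size/sum_q_size math above.
 */
static u16 example_txq_buf_size(u16 nq)
{
	u16 hdr = (u16)(sizeof(struct ice_aqc_add_tx_qgrp) -
			sizeof(struct ice_aqc_add_txqs_perq));

	return (u16)(hdr + nq * sizeof(struct ice_aqc_add_txqs_perq));
}
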
> +
> +/**
> + * ice_aq_dis_lan_txq
> + * @hw: pointer to the hardware structure
> + * @num_qgrps: number of groups in the list
> + * @qg_list: the list of groups to disable
> + * @buf_size: the total size of the qg_list buffer in bytes
> + * @rst_src: if called due to reset, specifies the rst source
> + * @vmvf_num: the relative vm or vf number that is undergoing the reset
> + * @cd: pointer to command details structure or NULL
> + *
> + * Disable LAN Tx queue (0x0C31)
> + */
> +static enum ice_status
> +ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
> +		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
> +		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_dis_txqs *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 i, sz = 0;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
> +	cmd = &desc.params.dis_txqs;
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
> +
> +	/* qg_list can be NULL only in VM/VF reset flow */
> +	if (!qg_list && !rst_src)
> +		return ICE_ERR_PARAM;
> +
> +	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
> +		return ICE_ERR_PARAM;
> +
> +	cmd->num_entries = num_qgrps;
> +
> +	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
> +					    ICE_AQC_Q_DIS_TIMEOUT_M);
> +
> +	switch (rst_src) {
> +	case ICE_VM_RESET:
> +		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
> +		cmd->vmvf_and_timeout |=
> +			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
> +		break;
> +	case ICE_NO_RESET:
> +	default:
> +		break;
> +	}
> +
> +	/* flush pipe on time out */
> +	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
> +	/* If no queue group info, we are in a reset flow. Issue the AQ */
> +	if (!qg_list)
> +		goto do_aq;

Prefer an explicit "qg_list == NULL" comparison here.

> +
> +	/* set RD bit to indicate that command buffer is provided by the driver
> +	 * and it needs to be read by the firmware
> +	 */
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	for (i = 0; i < num_qgrps; ++i) {
> +		/* Calculate the size taken up by the queue IDs in this group */
> +		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
> +
> +		/* Add the size of the group header */
> +		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
> +
> +		/* If the num of queues is even, add 2 bytes of padding */
> +		if ((qg_list[i].num_qs % 2) == 0)
> +			sz += 2;
> +	}
> +
> +	if (buf_size != sz)
> +		return ICE_ERR_PARAM;
> +
> +do_aq:
> +	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
> +	if (status) {
> +		if (!qg_list)
> +			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
> +				  vmvf_num, hw->adminq.sq_last_status);
> +		else
> +			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
> +				  LE16_TO_CPU(qg_list[0].q_id[0]),
> +				  hw->adminq.sq_last_status);
> +	}
> +	return status;
> +}
> +
> +
> +/* End of FW Admin Queue command wrappers */
> +
> +/**
> + * ice_write_byte - write a byte to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u8 src_byte, dest_byte, mask;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +	mask = (u8)(BIT(ce_info->width) - 1);
> +
> +	src_byte = *from;
> +	src_byte &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_byte <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
> +
> +	dest_byte &= ~mask;	/* get the bits not changing */
> +	dest_byte |= src_byte;	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_write_word - write a word to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u16 src_word, mask;
> +	__le16 dest_word;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +	mask = BIT(ce_info->width) - 1;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_word = *(u16 *)from;
> +	src_word &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_word <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
> +
> +	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
> +	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_write_dword - write a dword to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u32 src_dword, mask;
> +	__le32 dest_dword;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +
> +	/* if the field width is exactly 32 on an x86 machine, then the shift
> +	 * operation will not work because the SHL instructions count is masked
> +	 * to 5 bits so the shift will do nothing
> +	 */
> +	if (ce_info->width < 32)
> +		mask = BIT(ce_info->width) - 1;
> +	else
> +		mask = (u32)~0;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_dword = *(u32 *)from;
> +	src_dword &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_dword <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
> +
> +	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
> +	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
> +}
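
The width < 32 guard above matters beyond the x86 SHL behavior mentioned in
the comment: in C, shifting a 32-bit value by 32 is undefined behavior
outright. A standalone illustration of the safe mask construction, with the
undefined case left commented out:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int width = 32;
	/* uint32_t bad = UINT32_C(1) << width;  -- undefined: the count
	 * equals the type width; on x86 the count is masked to 5 bits,
	 * so this often yields 1 rather than the 0 one might expect */
	uint32_t mask = (width < 32) ? ((UINT32_C(1) << width) - 1)
				     : UINT32_MAX;

	printf("mask = 0x%08" PRIx32 "\n", mask);
	return 0;
}
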
> +
> +/**
> + * ice_write_qword - write a qword to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u64 src_qword, mask;
> +	__le64 dest_qword;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +
> +	/* if the field width is exactly 64 on an x86 machine, then the shift
> +	 * operation will not work because the SHL instructions count is masked
> +	 * to 6 bits so the shift will do nothing
> +	 */
> +	if (ce_info->width < 64)
> +		mask = BIT_ULL(ce_info->width) - 1;
> +	else
> +		mask = (u64)~0;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_qword = *(u64 *)from;
> +	src_qword &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_qword <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
> +
> +	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
> +	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_set_ctx - set context bits in packed structure
> + * @src_ctx:  pointer to a generic non-packed context structure
> + * @dest_ctx: pointer to memory for the packed structure
> + * @ce_info:  a description of the structure to be transformed
> + */
> +enum ice_status
> +ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	int f;
> +
> +	for (f = 0; ce_info[f].width; f++) {
> +		/* We have to deal with each element of the FW response
> +		 * using the correct size so that we are correct regardless
> +		 * of the endianness of the machine.
> +		 */
> +		switch (ce_info[f].size_of) {
> +		case sizeof(u8):
> +			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u16):
> +			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u32):
> +			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u64):
> +			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		default:
> +			return ICE_ERR_INVAL_SIZE;
> +		}
> +	}
> +
> +	return ICE_SUCCESS;
> +}
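
For readers new to this pattern, the read-modify-write steps shared by the
ice_write_* helpers reduce to the following standalone sketch of the
single-byte case (field offsets and values are illustrative):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Write a width-bit value into a packed buffer starting at bit lsb
 * (single-byte case, mirroring ice_write_byte above). */
static void pack_byte_field(uint8_t *dest, unsigned int lsb,
			    unsigned int width, uint8_t val)
{
	unsigned int shift = lsb % 8;
	uint8_t mask = (uint8_t)((1u << width) - 1);
	uint8_t cur;

	memcpy(&cur, dest + lsb / 8, 1);	 /* read current byte */
	cur &= (uint8_t)~(mask << shift);	 /* keep bits not changing */
	cur |= (uint8_t)((val & mask) << shift); /* add in the new bits */
	memcpy(dest + lsb / 8, &cur, 1);	 /* write it back */
}

int main(void)
{
	uint8_t ctx[4] = { 0 };

	pack_byte_field(ctx, 3, 2, 0x3);	/* 2-bit field at bit 3 */
	printf("ctx[0] = 0x%02x\n", ctx[0]);	/* prints 0x18 */
	return 0;
}
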
> +
> +
> +
> +
> +
> +/**
> + * ice_ena_vsi_txq
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc: tc number
> + * @num_qgrps: Number of added queue groups
> + * @buf: list of queue groups to be added
> + * @buf_size: size of buffer for indirect command
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function adds one LAN Tx queue.
> + */
> +enum ice_status
> +ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
> +		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
> +		struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_txsched_elem_data node = { 0 };
> +	struct ice_sched_node *parent;
> +	enum ice_status status;
> +	struct ice_hw *hw;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	if (num_qgrps > 1 || buf->num_txqs > 1)
> +		return ICE_ERR_MAX_LIMIT;
> +
> +	hw = pi->hw;
> +
> +	if (!ice_is_vsi_valid(hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	/* find a parent node */
> +	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
> +					    ICE_SCHED_NODE_OWNER_LAN);
> +	if (!parent) {
> +		status = ICE_ERR_PARAM;
> +		goto ena_txq_exit;
> +	}
> +
> +	buf->parent_teid = parent->info.node_teid;
> +	node.parent_teid = parent->info.node_teid;
> +	/* Mark the values in the "generic" section as valid. The default
> +	 * value in the "generic" section is zero. This means that:
> +	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
> +	 * - 0 priority among siblings, indicated by Bit 1-3.
> +	 * - WFQ, indicated by Bit 4.
> +	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
> +	 * Bit 5-6.
> +	 * - Bit 7 is reserved.
> +	 * Without setting the generic section as valid in valid_sections, the
> +	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
> +	 */
> +	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
> +
> +	/* add the LAN Tx queue */
> +	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
> +	if (status != ICE_SUCCESS) {
> +		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
> +			  LE16_TO_CPU(buf->txqs[0].txq_id),
> +			  hw->adminq.sq_last_status);
> +		goto ena_txq_exit;
> +	}
> +
> +	node.node_teid = buf->txqs[0].q_teid;
> +	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
> +
> +	/* add a leaf node into the scheduler tree queue layer */
> +	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
> +
> +ena_txq_exit:
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
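
A caller-side sketch, assuming the surrounding ice headers; the Tx queue
context fields mentioned in the ice_aq_add_lan_txq() note (completion queue
ID, profiles) are omitted here for brevity:

#include "ice_common.h"

/* Sketch: enable a single LAN Tx queue on TC 0 for a VSI. */
static enum ice_status
example_ena_one_txq(struct ice_port_info *pi, u16 vsi_handle, u16 txq_id)
{
	struct ice_aqc_add_tx_qgrp qg = { 0 };

	qg.num_txqs = 1;
	qg.txqs[0].txq_id = CPU_TO_LE16(txq_id);
	/* parent_teid and valid_sections are filled in by ice_ena_vsi_txq()
	 * itself before it issues the AQ command */
	return ice_ena_vsi_txq(pi, vsi_handle, 0, 1, &qg, sizeof(qg), NULL);
}
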
> +
> +/**
> + * ice_dis_vsi_txq
> + * @pi: port information structure
> + * @num_queues: number of queues
> + * @q_ids: pointer to the q_id array
> + * @q_teids: pointer to queue node teids
> + * @rst_src: if called due to reset, specifies the rst source
> + * @vmvf_num: the relative vm or vf number that is undergoing the reset
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function removes queues and their corresponding nodes in SW DB
> + */
> +enum ice_status
> +ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
> +		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		struct ice_sq_cd *cd)
> +{
> +	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
> +	struct ice_aqc_dis_txq_item qg_list;
> +	u16 i;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	/* If the queues are already disabled but the disable queue command
> +	 * still has to be sent to complete the VF reset, call
> +	 * ice_aq_dis_lan_txq without any queue information.
> +	 */
> +
> +	if (!num_queues && rst_src)
> +		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
> +					  NULL);
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	for (i = 0; i < num_queues; i++) {
> +		struct ice_sched_node *node;
> +
> +		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
> +		if (!node)
> +			continue;
> +		qg_list.parent_teid = node->info.parent_teid;
> +		qg_list.num_qs = 1;
> +		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
> +		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
> +					    sizeof(qg_list), rst_src, vmvf_num,
> +					    cd);
> +
> +		if (status != ICE_SUCCESS)
> +			break;
> +		ice_free_sched_node(pi, node);
> +	}
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
> +
> +/**
> + * ice_cfg_vsi_qs - configure the new/existing VSI queues
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc_bitmap: TC bitmap
> + * @maxqs: max queues array per TC
> + * @owner: lan or rdma
> + *
> + * This function adds/updates the VSI queues per TC.
> + */
> +static enum ice_status
> +ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +	       u16 *maxqs, u8 owner)
> +{
> +	enum ice_status status = ICE_SUCCESS;
> +	u8 i;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
> +		/* configuration is possible only if TC node is present */
> +		if (!ice_sched_get_tc_node(pi, i))
> +			continue;
> +
> +		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
> +					   ice_is_tc_ena(tc_bitmap, i));
> +		if (status)
> +			break;
> +	}
> +
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
> +
> +/**
> + * ice_cfg_vsi_lan - configure VSI lan queues
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc_bitmap: TC bitmap
> + * @max_lanqs: max lan queues array per TC
> + *
> + * This function adds/updates the VSI lan queues per TC.
> + */
> +enum ice_status
> +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +		u16 *max_lanqs)
> +{
> +	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
> +			      ICE_SCHED_NODE_OWNER_LAN);
> +}
> +
> +
> +
> +/**
> + * ice_replay_pre_init - replay pre initialization
> + * @hw: pointer to the hw struct
> + *
> + * Initializes required config data for VSI, FD, ACL, and RSS before replay.
> + */
> +static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw = hw->switch_info;
> +	u8 i;
> +
> +	/* Delete old entries from replay filter list head if there is any */
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	/* In start of replay, move entries into replay_rules list, it
> +	 * will allow adding rules entries back to filt_rules list,
> +	 * which is operational list.
> +	 */
> +	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
> +		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
> +				  &sw->recp_list[i].filt_replay_rules);
> +	ice_sched_replay_agg_vsi_preinit(hw);
> +
> +	return ice_sched_replay_tc_node_bw(hw);
> +}
> +
> +/**
> + * ice_replay_vsi - replay vsi configuration
> + * @hw: pointer to the hw struct
> + * @vsi_handle: driver vsi handle
> + *
> + * Restore all VSI configuration after reset. It is required to call this
> + * function with the main VSI first.
> + */
> +enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
> +{
> +	enum ice_status status;
> +
> +	if (!ice_is_vsi_valid(hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	/* Replay pre-initialization if there is any */
> +	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
> +		status = ice_replay_pre_init(hw);
> +		if (status)
> +			return status;
> +	}
> +
> +	/* Replay per VSI all filters */
> +	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
> +	if (!status)
> +		status = ice_replay_vsi_agg(hw, vsi_handle);
> +	return status;
> +}
> +
> +/**
> + * ice_replay_post - post replay configuration cleanup
> + * @hw: pointer to the hw struct
> + *
> + * Post replay cleanup.
> + */
> +void ice_replay_post(struct ice_hw *hw)
> +{
> +	/* Delete old entries from replay filter list head */
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	ice_sched_replay_agg(hw);
> +}
> +
> +/**
> + * ice_stat_update40 - read 40 bit stat from the chip and update stat values
> + * @hw: ptr to the hardware info
> + * @hireg: high 32 bit HW register to read from
> + * @loreg: low 32 bit HW register to read from
> + * @prev_stat_loaded: bool to specify if previous stats are loaded
> + * @prev_stat: ptr to previous loaded stat value
> + * @cur_stat: ptr to current stat value
> + */
> +void
> +ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
> +		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
> +{
> +	u64 new_data;
> +
> +	new_data = rd32(hw, loreg);
> +	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
> +
> +	/* device stats are not reset at PFR; they likely will not be zeroed
> +	 * when the driver starts. So save the first values read and use them as
> +	 * offsets to be subtracted from the raw values in order to report stats
> +	 * that count from zero.
> +	 */
> +	if (!prev_stat_loaded)
> +		*prev_stat = new_data;
> +	if (new_data >= *prev_stat)
> +		*cur_stat = new_data - *prev_stat;
> +	else
> +		/* to manage the potential roll-over */
> +		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
> +	*cur_stat &= 0xFFFFFFFFFFULL;
> +}
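
The rollover branch above is easy to check in isolation; a standalone sketch
of the same 40-bit delta arithmetic:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch: 40-bit counter delta with rollover handling, mirroring
 * ice_stat_update40() above. */
static uint64_t delta40(uint64_t prev, uint64_t cur)
{
	uint64_t d = (cur >= prev) ? cur - prev
				   : (cur + (UINT64_C(1) << 40)) - prev;

	return d & UINT64_C(0xFFFFFFFFFF);
}

int main(void)
{
	/* counter wrapped from near 2^40 back past zero: delta is 0x200 */
	printf("0x%" PRIx64 "\n", delta40(UINT64_C(0xFFFFFFFF00),
					  UINT64_C(0x100)));
	return 0;
}
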
> +
> +/**
> + * ice_stat_update32 - read 32 bit stat from the chip and update stat values
> + * @hw: ptr to the hardware info
> + * @reg: HW register to read from
> + * @prev_stat_loaded: bool to specify if previous stats are loaded
> + * @prev_stat: ptr to previous loaded stat value
> + * @cur_stat: ptr to current stat value
> + */
> +void
> +ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
> +		  u64 *prev_stat, u64 *cur_stat)
> +{
> +	u32 new_data;
> +
> +	new_data = rd32(hw, reg);
> +
> +	/* device stats are not reset at PFR; they likely will not be zeroed
> +	 * when the driver starts. So save the first values read and use them as
> +	 * offsets to be subtracted from the raw values in order to report stats
> +	 * that count from zero.
> +	 */
> +	if (!prev_stat_loaded)
> +		*prev_stat = new_data;
> +	if (new_data >= *prev_stat)
> +		*cur_stat = new_data - *prev_stat;
> +	else
> +		/* to manage the potential roll-over */
> +		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
> +}
> +
> +
> +/**
> + * ice_sched_query_elem - query element information from hw
> + * @hw: pointer to the hw struct
> + * @node_teid: node teid to be queried
> + * @buf: buffer to element information
> + *
> + * This function queries HW element information
> + */
> +enum ice_status
> +ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
> +		     struct ice_aqc_get_elem *buf)
> +{
> +	u16 buf_size, num_elem_ret = 0;
> +	enum ice_status status;
> +
> +	buf_size = sizeof(*buf);
> +	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
> +	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
> +	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
> +					  NULL);
> +	if (status != ICE_SUCCESS || num_elem_ret != 1)
> +		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
> +	return status;
> +}
> diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
> new file mode 100644
> index 0000000..082ae66
> --- /dev/null
> +++ b/drivers/net/ice/base/ice_common.h
> @@ -0,0 +1,186 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#ifndef _ICE_COMMON_H_
> +#define _ICE_COMMON_H_
> +
> +#include "ice_type.h"
> +
> +#include "ice_switch.h"
> +
> +/* prototype for functions used for SW locks */
> +void ice_free_list(struct LIST_HEAD_TYPE *list);
> +void ice_init_lock(struct ice_lock *lock);
> +void ice_acquire_lock(struct ice_lock *lock);
> +void ice_release_lock(struct ice_lock *lock);
> +void ice_destroy_lock(struct ice_lock *lock);
> +
> +void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
> +void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
> +
> +bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
> +
> +enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
> +
> +void
> +ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
> +enum ice_status ice_init_hw(struct ice_hw *hw);
> +void ice_deinit_hw(struct ice_hw *hw);
> +enum ice_status ice_check_reset(struct ice_hw *hw);
> +enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
> +
> +enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
> +void ice_shutdown_all_ctrlq(struct ice_hw *hw);
> +enum ice_status
> +ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> +		  struct ice_rq_event_info *e, u16 *pending);
> +enum ice_status
> +ice_get_link_status(struct ice_port_info *pi, bool *link_up);
> +enum ice_status
> +ice_update_link_info(struct ice_port_info *pi);
> +enum ice_status
> +ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +		enum ice_aq_res_access_type access, u32 timeout);
> +void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
> +enum ice_status
> +ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
> +		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
> +		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
> +enum ice_status ice_init_nvm(struct ice_hw *hw);
> +enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
> +enum ice_status
> +ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
> +enum ice_status
> +ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> +		struct ice_aq_desc *desc, void *buf, u16 buf_size,
> +		struct ice_sq_cd *cd);
> +void ice_clear_pxe_mode(struct ice_hw *hw);
> +
> +enum ice_status ice_get_caps(struct ice_hw *hw);
> +
> +
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +void ice_dev_onetime_setup(struct ice_hw *hw);
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +
> +enum ice_status
> +ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
> +		  u32 rxq_index);
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
> +enum ice_status
> +ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
> +enum ice_status
> +ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
> +			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
> +			 u32 tx_cmpltnq_index);
> +enum ice_status
> +ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
> +enum ice_status
> +ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
> +			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
> +			  u32 tx_drbell_q_index);
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +enum ice_status
> +ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
> +		   u16 lut_size);
> +enum ice_status
> +ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
> +		   u16 lut_size);
> +enum ice_status
> +ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys);
> +enum ice_status
> +ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys);
> +
> +bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
> +enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
> +void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
> +extern const struct ice_ctx_ele ice_tlan_ctx_info[];
> +enum ice_status
> +ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
> +enum ice_status
> +ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
> +		void *buf, u16 buf_size, struct ice_sq_cd *cd);
> +enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
> +
> +enum ice_status
> +ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
> +		    struct ice_aqc_get_phy_caps_data *caps,
> +		    struct ice_sq_cd *cd);
> +void
> +ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
> +		    u16 link_speeds_bitmap);
> +enum ice_status
> +ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
> +			struct ice_sq_cd *cd);
> +
> +enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
> +enum ice_status
> +ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
> +		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
> +enum ice_status
> +ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
> +	   bool ena_auto_link_update);
> +void
> +ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
> +void
> +ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
> +			 struct ice_aqc_set_phy_cfg_data *cfg);
> +enum ice_status
> +ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
> +			   struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
> +		     struct ice_link_status *link, struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
> +		      struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
> +
> +
> +enum ice_status
> +ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
> +		       struct ice_sq_cd *cd);
> +
> +
> +
> +
> +enum ice_status
> +ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
> +		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		struct ice_sq_cd *cmd_details);
> +enum ice_status
> +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +		u16 *max_lanqs);
> +enum ice_status
> +ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
> +		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
> +		struct ice_sq_cd *cd);
> +enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
> +void ice_replay_post(struct ice_hw *hw);
> +void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
> +void ice_sched_replay_agg(struct ice_hw *hw);
> +enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
> +enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
> +enum ice_status
> +ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
> +			 enum ice_rl_type rl_type, u8 bw_alloc);
> +enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
> +void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
> +void
> +ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
> +		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
> +void
> +ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
> +		  u64 *prev_stat, u64 *cur_stat);
> +enum ice_status
> +ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
> +		     struct ice_aqc_get_elem *buf);
> +#endif /* _ICE_COMMON_H_ */
> 

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-12 20:07     ` Mattias Rönnblom
  2018-12-13  1:34       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Mattias Rönnblom @ 2018-12-12 20:07 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 2018-12-12 07:59, Wenzhuo Lu wrote:

/../

> +
> +	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
> +		((volatile char *)rxq->rx_ring)[i] = 0;

More of a general question... but why doesn't DPDK have the
READ/WRITE_ONCE() macros of the Linux kernel? They would reduce the
amount of open-coded use of volatile.
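
For context, a minimal sketch of what kernel-style once-accessors could look
like; this is not an existing DPDK API, and typeof is a GCC extension DPDK
already relies on. The descriptor-zeroing loop quoted above could then drop
the open-coded volatile cast:

#include <stddef.h>
#include <stdint.h>

/* Sketch of Linux-kernel-style once-accessors (not a DPDK API). */
#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

/* Zero a descriptor ring byte by byte; the volatile access is hidden
 * inside the macro instead of being open-coded at every call site. */
static void ring_clear(uint8_t *ring, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		WRITE_ONCE(ring[i], 0);
}
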

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
  2018-12-12 19:58     ` Mattias Rönnblom
@ 2018-12-12 21:18       ` Stillwell Jr, Paul M
  2018-12-13  1:26         ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Stillwell Jr, Paul M @ 2018-12-12 21:18 UTC (permalink / raw)
  To: Mattias Rönnblom, Lu, Wenzhuo, dev

-----Original Message-----
From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com] 
Sent: Wednesday, December 12, 2018 11:59 AM
To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions

On 2018-12-12 07:59, Wenzhuo Lu wrote:
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> Add code that multiple other features use.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> ---
>   drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
>   drivers/net/ice/base/ice_common.h |  186 ++
>   2 files changed, 3707 insertions(+)
>   create mode 100644 drivers/net/ice/base/ice_common.c
>   create mode 100644 drivers/net/ice/base/ice_common.h
> 
> diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> new file mode 100644
> index 0000000..d49264d
> --- /dev/null
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -0,0 +1,3521 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#include "ice_common.h"
> +#include "ice_sched.h"
> +#include "ice_adminq_cmd.h"
> +
> +#include "ice_flow.h"
> +#include "ice_switch.h"
> +
> +#define ICE_PF_RESET_WAIT_COUNT	200
> +
> +#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
> +	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
> +	     ((ICE_RX_OPC_MDID << \
> +	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
> +	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
> +	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
> +	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
> +
> +#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
> +	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
> +	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
> +	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
> +	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
> +	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
> +	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
> +
> +
> +/**
> + * ice_set_mac_type - Sets MAC type
> + * @hw: pointer to the HW structure
> + *
> + * This function sets the MAC type of the adapter based on the
> + * vendor ID and device ID stored in the hw structure.
> + */
> +static enum ice_status ice_set_mac_type(struct ice_hw *hw)
> +{
> +	enum ice_status status = ICE_SUCCESS;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
> +
> +	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
> +		switch (hw->device_id) {
> +		default:
> +			hw->mac_type = ICE_MAC_GENERIC;
> +			break;
> +		}
> +	} else {
> +		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
> +	}
> +

Remove braces from single-statement block.

> +	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
> +		  hw->mac_type, status);
> +
> +	return status;
> +}
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +void ice_dev_onetime_setup(struct ice_hw *hw)
> +{
> +	/* configure Rx - set non pxe mode */
> +	wr32(hw, GLLAN_RCTL_0, 0x1);
> +
> +
> +
> +}
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +/**
> + * ice_clear_pf_cfg - Clear PF configuration
> + * @hw: pointer to the hardware structure
> + *
> + * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
> + * configuration, flow director filters, etc.).
> + */
> +enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
> +{
> +	struct ice_aq_desc desc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_aq_manage_mac_read - manage MAC address read command
> + * @hw: pointer to the hw struct
> + * @buf: a virtual buffer to hold the manage MAC read response
> + * @buf_size: Size of the virtual buffer
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function is used to return the per-PF station MAC address (0x0107).
> + * NOTE: Upon successful completion of this command, MAC address information
> + * is returned in the user-specified buffer, which should be interpreted as
> + * a "manage_mac_read" response.
> + * Responses such as various MAC addresses are stored in the HW struct
> + * (port.mac). ice_aq_discover_caps is expected to be called before this
> + * function is called.
> + */
> +static enum ice_status
> +ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
> +		       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_manage_mac_read_resp *resp;
> +	struct ice_aqc_manage_mac_read *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 flags;
> +	u8 i;
> +
> +	cmd = &desc.params.mac_read;
> +
> +	if (buf_size < sizeof(*resp))
> +		return ICE_ERR_BUF_TOO_SHORT;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +	if (status)
> +		return status;
> +
> +	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
> +	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
> +
> +	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
> +		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
> +		return ICE_ERR_CFG;
> +	}
> +
> +	/* A single port can report up to two (LAN and WoL) addresses */
> +	for (i = 0; i < cmd->num_addr; i++)
> +		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
> +			ice_memcpy(hw->port_info->mac.lan_addr,
> +				   resp[i].mac_addr, ETH_ALEN,
> +				   ICE_DMA_TO_NONDMA);
> +			ice_memcpy(hw->port_info->mac.perm_addr,
> +				   resp[i].mac_addr,
> +				   ETH_ALEN, ICE_DMA_TO_NONDMA);
> +			break;
> +		}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_aq_get_phy_caps - returns PHY capabilities
> + * @pi: port information structure
> + * @qual_mods: report qualified modules
> + * @report_mode: report mode capabilities
> + * @pcaps: structure for PHY capabilities to be filled
> + * @cd: pointer to command details structure or NULL
> + *
> + * Returns the various PHY capabilities supported on the Port (0x0600)
> + */
> +enum ice_status
> +ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
> +		    struct ice_aqc_get_phy_caps_data *pcaps,
> +		    struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_get_phy_caps *cmd;
> +	u16 pcaps_size = sizeof(*pcaps);
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	cmd = &desc.params.get_phy;
> +
> +	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
> +
> +	if (qual_mods)
> +		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
> +
> +	cmd->param0 |= CPU_TO_LE16(report_mode);
> +	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
> +
> +	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
> +		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
> +		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
> +	}
> +
> +	return status;
> +}
> +
> +/**
> + * ice_get_media_type - Gets media type
> + * @pi: port information structure
> + */
> +static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
> +{
> +	struct ice_link_status *hw_link_info;
> +
> +	if (!pi)
> +		return ICE_MEDIA_UNKNOWN;
> +
> +	hw_link_info = &pi->phy.link_info;
> +	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
> +		/* If more than one media type is selected, report unknown */
> +		return ICE_MEDIA_UNKNOWN;
> +
> +	if (hw_link_info->phy_type_low) {
> +		switch (hw_link_info->phy_type_low) {
> +		case ICE_PHY_TYPE_LOW_1000BASE_SX:
> +		case ICE_PHY_TYPE_LOW_1000BASE_LX:
> +		case ICE_PHY_TYPE_LOW_10GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_10GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
> +		case ICE_PHY_TYPE_LOW_25GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
> +		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
> +		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_SR:
> +		case ICE_PHY_TYPE_LOW_50GBASE_FR:
> +		case ICE_PHY_TYPE_LOW_50GBASE_LR:
> +		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
> +		case ICE_PHY_TYPE_LOW_100GBASE_DR:
> +			return ICE_MEDIA_FIBER;
> +		case ICE_PHY_TYPE_LOW_100BASE_TX:
> +		case ICE_PHY_TYPE_LOW_1000BASE_T:
> +		case ICE_PHY_TYPE_LOW_2500BASE_T:
> +		case ICE_PHY_TYPE_LOW_5GBASE_T:
> +		case ICE_PHY_TYPE_LOW_10GBASE_T:
> +		case ICE_PHY_TYPE_LOW_25GBASE_T:
> +			return ICE_MEDIA_BASET;
> +		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
> +		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
> +		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
> +		case ICE_PHY_TYPE_LOW_50GBASE_CP:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
> +			return ICE_MEDIA_DA;
> +		case ICE_PHY_TYPE_LOW_1000BASE_KX:
> +		case ICE_PHY_TYPE_LOW_2500BASE_KX:
> +		case ICE_PHY_TYPE_LOW_2500BASE_X:
> +		case ICE_PHY_TYPE_LOW_5GBASE_KR:
> +		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
> +		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
> +		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
> +		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
> +		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
> +		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
> +			return ICE_MEDIA_BACKPLANE;
> +		}
> +	} else {
> +		switch (hw_link_info->phy_type_high) {
> +		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
> +			return ICE_MEDIA_BACKPLANE;
> +		}
> +	}
> +	return ICE_MEDIA_UNKNOWN;
> +}
> +
> +/**
> + * ice_aq_get_link_info
> + * @pi: port information structure
> + * @ena_lse: enable/disable LinkStatusEvent reporting
> + * @link: pointer to link status structure - optional
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get Link Status (0x607). Returns the link status of the adapter.
> + */
> +enum ice_status
> +ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
> +		     struct ice_link_status *link, struct ice_sq_cd *cd)
> +{
> +	struct ice_link_status *hw_link_info_old, *hw_link_info;
> +	struct ice_aqc_get_link_status_data link_data = { 0 };
> +	struct ice_aqc_get_link_status *resp;
> +	enum ice_media_type *hw_media_type;
> +	struct ice_fc_info *hw_fc_info;
> +	bool tx_pause, rx_pause;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 cmd_flags;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

if (pi == NULL)

I'm confused by this comment. There are hundreds of these types of expressions in DPDK currently, so why would we change the instances in this file?
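
For reference, the two spellings under discussion are semantically
identical; the terse form is the one that dominates existing DPDK code:

	if (!pi)		/* prevailing DPDK style */
		return ICE_ERR_PARAM;

	if (pi == NULL)		/* explicit comparison suggested in review */
		return ICE_ERR_PARAM;
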

> +	hw_link_info_old = &pi->phy.link_info_old;
> +	hw_media_type = &pi->phy.media_type;
> +	hw_link_info = &pi->phy.link_info;
> +	hw_fc_info = &pi->fc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
> +	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
> +	resp = &desc.params.get_link_status;
> +	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
> +	resp->lport_num = pi->lport;
> +
> +	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
> +				 cd);
> +
> +	if (status != ICE_SUCCESS)
> +		return status;
> +
> +	/* save off old link status information */
> +	*hw_link_info_old = *hw_link_info;
> +
> +	/* update current link status information */
> +	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
> +	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
> +	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
> +	*hw_media_type = ice_get_media_type(pi);
> +	hw_link_info->link_info = link_data.link_info;
> +	hw_link_info->an_info = link_data.an_info;
> +	hw_link_info->ext_info = link_data.ext_info;
> +	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
> +	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
> +	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
> +	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
> +
> +	/* update fc info */
> +	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
> +	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
> +	if (tx_pause && rx_pause)
> +		hw_fc_info->current_mode = ICE_FC_FULL;
> +	else if (tx_pause)
> +		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
> +	else if (rx_pause)
> +		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
> +	else
> +		hw_fc_info->current_mode = ICE_FC_NONE;
> +
> +	hw_link_info->lse_ena =
> +		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
> +
> +
> +	/* save link status information */
> +	if (link)
> +		*link = *hw_link_info;
> +
> +	/* flag cleared so calling functions don't call AQ again */
> +	pi->phy.get_link_info = false;
> +
> +	return status;
> +}
> +
> +/**
> + * ice_init_flex_flags
> + * @hw: pointer to the hardware structure
> + * @prof_id: Rx Descriptor Builder profile ID
> + *
> + * Function to initialize Rx flex flags
> + */
> +static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
> +{
> +	u8 idx = 0;
> +
> +	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
> +	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
> +	 * flexiflags1[3:0] - Not used for flag programming
> +	 * flexiflags2[7:0] - Tunnel and VLAN types
> +	 * 2 invalid fields in last index
> +	 */
> +	switch (prof_id) {
> +	/* Rx flex flags are currently programmed for the NIC profiles only.
> +	 * Different flag bit programming configurations can be added per
> +	 * profile as needed.
> +	 */
> +	case ICE_RXDID_FLEX_NIC:
> +	case ICE_RXDID_FLEX_NIC_2:
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
> +				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
> +				   ICE_RXFLG_FIN, idx++);
> +		/* flex flag 1 is not used for flexi-flag programming, skipping
> +		 * these four FLG64 bits.
> +		 */
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
> +				   ICE_RXFLG_EVLAN_x9100, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
> +				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
> +				   ICE_RXFLG_TNL0, idx++);
> +		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
> +				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
> +		break;
> +
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Flag programming for profile ID %d not supported\n",
> +			  prof_id);
> +	}
> +}
> +
> +/**
> + * ice_init_flex_flds
> + * @hw: pointer to the hardware structure
> + * @prof_id: Rx Descriptor Builder profile ID
> + *
> + * Function to initialize flex descriptors
> + */
> +static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
> +{
> +	enum ice_flex_rx_mdid mdid;
> +
> +	switch (prof_id) {
> +	case ICE_RXDID_FLEX_NIC:
> +	case ICE_RXDID_FLEX_NIC_2:
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
> +
> +		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
> +			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
> +
> +		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
> +
> +		ice_init_flex_flags(hw, prof_id);
> +		break;
> +
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Field init for profile ID %d not supported\n",
> +			  prof_id);
> +	}
> +}
> +
> +
> +/**
> + * ice_init_fltr_mgmt_struct - initializes filter management list and locks
> + * @hw: pointer to the hw struct
> + */
> +static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw;
> +
> +	hw->switch_info = (struct ice_switch_info *)
> +			  ice_malloc(hw, sizeof(*hw->switch_info));
> +	sw = hw->switch_info;
> +
> +	if (!sw)
> +		return ICE_ERR_NO_MEMORY;

if (sw == NULL)

> +
> +	INIT_LIST_HEAD(&sw->vsi_list_map_head);
> +
> +	return ice_init_def_sw_recp(hw);
> +}
> +
> +/**
> + * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
> + * @hw: pointer to the hw struct
> + */
> +static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw = hw->switch_info;
> +	struct ice_vsi_list_map_info *v_pos_map;
> +	struct ice_vsi_list_map_info *v_tmp_map;
> +	struct ice_sw_recipe *recps;
> +	u8 i;
> +
> +	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
> +				 ice_vsi_list_map_info, list_entry) {
> +		LIST_DEL(&v_pos_map->list_entry);
> +		ice_free(hw, v_pos_map);
> +	}
> +	recps = hw->switch_info->recp_list;
> +	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
> +		recps[i].root_rid = i;
> +
> +		if (recps[i].adv_rule) {
> +			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> +			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> +
> +			ice_destroy_lock(&recps[i].filt_rule_lock);
> +			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
> +						 &recps[i].filt_rules,
> +						 ice_adv_fltr_mgmt_list_entry,
> +						 list_entry) {
> +				LIST_DEL(&lst_itr->list_entry);
> +				ice_free(hw, lst_itr->lkups);
> +				ice_free(hw, lst_itr);
> +			}
> +		} else {
> +			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
> +
> +			ice_destroy_lock(&recps[i].filt_rule_lock);
> +			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
> +						 &recps[i].filt_rules,
> +						 ice_fltr_mgmt_list_entry,
> +						 list_entry) {
> +				LIST_DEL(&lst_itr->list_entry);
> +				ice_free(hw, lst_itr);
> +			}
> +		}
> +	}
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	ice_free(hw, sw->recp_list);
> +	ice_free(hw, sw);
> +}
> +
> +#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
> +	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
> +#define ICE_FW_LOG_DESC_SIZE_MAX	\
> +	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
> +
> +/**
> + * ice_cfg_fw_log - configure FW logging
> + * @hw: pointer to the hw struct
> + * @enable: enable certain FW logging events if true, disable all if false
> + *
> + * This function enables/disables the FW logging via Rx CQ events and a UART
> + * port based on predetermined configurations. FW logging via the Rx CQ can
> + * be enabled/disabled for individual PFs. However, FW logging via the UART can
> + * only be enabled/disabled for all PFs on the same device.
> + *
> + * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
> + * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
> + * before initializing the device.
> + *
> + * When re/configuring FW logging, callers need to update the "cfg" elements of
> + * the hw->fw_log.evnts array with the desired logging event configurations for
> + * modules of interest. When disabling FW logging completely, the callers can
> + * just pass false in the "enable" parameter. On completion, the function will
> + * update the "cur" element of the hw->fw_log.evnts array with the resulting
> + * logging event configurations of the modules that are being re/configured. FW
> + * logging modules that are not part of a reconfiguration operation retain their
> + * previous states.
> + *
> + * Before resetting the device, it is recommended that the driver disables FW
> + * logging before shutting down the control queue. When disabling FW logging
> + * ("enable" = false), the latest configurations of FW logging events stored in
> + * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
> + * a device reset.
> + *
> + * When enabling FW logging to emit log messages via the Rx CQ during the
> + * device's initialization phase, a mechanism alternative to interrupt handlers
> + * needs to be used to extract FW log messages from the Rx CQ periodically and
> + * to prevent the Rx CQ from being full and stalling other types of control
> + * messages from FW to SW. Interrupts are typically disabled during the device's
> + * initialization phase.
> + */
> +static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
> +{
> +	struct ice_aqc_fw_logging_data *data = NULL;
> +	struct ice_aqc_fw_logging *cmd;
> +	enum ice_status status = ICE_SUCCESS;
> +	u16 i, chgs = 0, len = 0;
> +	struct ice_aq_desc desc;
> +	u8 actv_evnts = 0;
> +	void *buf = NULL;
> +
> +	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
> +		return ICE_SUCCESS;
> +
> +	/* Disable FW logging only when the control queue is still responsive */
> +	if (!enable &&
> +	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
> +		return ICE_SUCCESS;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
> +	cmd = &desc.params.fw_logging;
> +
> +	/* Indicate which controls are valid */
> +	if (hw->fw_log.cq_en)
> +		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
> +
> +	if (hw->fw_log.uart_en)
> +		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
> +
> +	if (enable) {
> +		/* Fill in an array of entries with FW logging modules and
> +		 * logging events being reconfigured.
> +		 */
> +		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
> +			u16 val;
> +
> +			/* Keep track of enabled event types */
> +			actv_evnts |= hw->fw_log.evnts[i].cfg;
> +
> +			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
> +				continue;
> +
> +			if (!data) {
> +				data = (struct ice_aqc_fw_logging_data *)
> +					ice_malloc(hw,
> +						   ICE_FW_LOG_DESC_SIZE_MAX);
> +				if (!data)
> +					return ICE_ERR_NO_MEMORY;
> +			}
> +
> +			val = i << ICE_AQC_FW_LOG_ID_S;
> +			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
> +			data->entry[chgs++] = CPU_TO_LE16(val);
> +		}
> +
> +		/* Only enable FW logging if at least one module is specified.
> +		 * If FW logging is currently enabled but all modules are not
> +		 * enabled to emit log messages, disable FW logging altogether.
> +		 */
> +		if (actv_evnts) {
> +			/* Leave if there is effectively no change */
> +			if (!chgs)
> +				goto out;
> +
> +			if (hw->fw_log.cq_en)
> +				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
> +
> +			if (hw->fw_log.uart_en)
> +				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
> +
> +			buf = data;
> +			len = ICE_FW_LOG_DESC_SIZE(chgs);
> +			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +		}
> +	}
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
> +	if (!status) {
> +		/* Update the current configuration to reflect events enabled.
> +		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
> +		 * logging mode is enabled for the device. They do not reflect
> +		 * actual modules being enabled to emit log messages. So, their
> +		 * values remain unchanged even when all modules are disabled.
> +		 */
> +		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
> +
> +		hw->fw_log.actv_evnts = actv_evnts;
> +		for (i = 0; i < cnt; i++) {
> +			u16 v, m;
> +
> +			if (!enable) {
> +				/* When disabling all FW logging events as part
> +				 * of device's de-initialization, the original
> +				 * configurations are retained, and can be used
> +				 * to reconfigure FW logging later if the device
> +				 * is re-initialized.
> +				 */
> +				hw->fw_log.evnts[i].cur = 0;
> +				continue;
> +			}
> +
> +			v = LE16_TO_CPU(data->entry[i]);
> +			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
> +			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
> +		}
> +	}
> +
> +out:
> +	if (data)
> +		ice_free(hw, data);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_output_fw_log
> + * @hw: pointer to the hw struct
> + * @desc: pointer to the AQ message descriptor
> + * @buf: pointer to the buffer accompanying the AQ message
> + *
> + * Formats a FW Log message and outputs it via the standard driver logs.
> + */
> +void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
> +{
> +	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
> +	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
> +			LE16_TO_CPU(desc->datalen));
> +	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
> +}
> +
> +/**
> + * ice_get_itr_intrl_gran - determine int/intrl granularity
> + * @hw: pointer to the hw struct
> + *
> + * Determines the itr/intrl granularities based on the maximum aggregate
> + * bandwidth according to the device's configuration during power-on.
> + */
> +static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
> +{
> +	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
> +			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
> +			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
> +
> +	switch (max_agg_bw) {
> +	case ICE_MAX_AGG_BW_200G:
> +	case ICE_MAX_AGG_BW_100G:
> +	case ICE_MAX_AGG_BW_50G:
> +		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
> +		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
> +		break;
> +	case ICE_MAX_AGG_BW_25G:
> +		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
> +		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
> +		break;
> +	default:
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Failed to determine itr/intrl granularity\n");
> +		return ICE_ERR_CFG;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_init_hw - main hardware initialization routine
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_init_hw(struct ice_hw *hw)
> +{
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	enum ice_status status;
> +	u16 mac_buf_len;
> +	void *mac_buf;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
> +
> +
> +	/* Set MAC type based on DeviceID */
> +	status = ice_set_mac_type(hw);
> +	if (status)
> +		return status;
> +
> +	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
> +			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
> +		PF_FUNC_RID_FUNCTION_NUMBER_S;
> +
> +
> +	status = ice_reset(hw, ICE_RESET_PFR);
> +	if (status)
> +		return status;
> +
> +	status = ice_get_itr_intrl_gran(hw);
> +	if (status)
> +		return status;
> +
> +
> +	status = ice_init_all_ctrlq(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	/* Enable FW logging. Not fatal if this fails. */
> +	status = ice_cfg_fw_log(hw, true);
> +	if (status)
> +		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
> +
> +	status = ice_clear_pf_cfg(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +
> +	ice_clear_pxe_mode(hw);
> +
> +	status = ice_init_nvm(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	status = ice_get_caps(hw);
> +	if (status)
> +		goto err_unroll_cqinit;
> +
> +	hw->port_info = (struct ice_port_info *)
> +			ice_malloc(hw, sizeof(*hw->port_info));
> +	if (!hw->port_info) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_cqinit;
> +	}
> +
> +	/* set the back pointer to hw */
> +	hw->port_info->hw = hw;
> +
> +	/* Initialize port_info struct with switch configuration data */
> +	status = ice_get_initial_sw_cfg(hw);
> +	if (status)
> +		goto err_unroll_alloc;
> +
> +	hw->evb_veb = true;
> +
> +	/* Query the allocated resources for Tx scheduler */
> +	status = ice_sched_query_res_alloc(hw);
> +	if (status) {
> +		ice_debug(hw, ICE_DBG_SCHED,
> +			  "Failed to get scheduler allocated resources\n");
> +		goto err_unroll_alloc;
> +	}
> +
> +
> +	/* Initialize port_info struct with scheduler data */
> +	status = ice_sched_init_port(hw->port_info);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));
> +	if (!pcaps) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_sched;
> +	}
> +
> +	/* Initialize port_info struct with PHY capabilities */
> +	status = ice_aq_get_phy_caps(hw->port_info, false,
> +				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
> +	ice_free(hw, pcaps);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +	/* Initialize port_info struct with link information */
> +	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
> +	if (status)
> +		goto err_unroll_sched;
> +	/* need a valid SW entry point to build a Tx tree */
> +	if (!hw->sw_entry_point_layer) {
> +		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
> +		status = ICE_ERR_CFG;
> +		goto err_unroll_sched;
> +	}
> +	INIT_LIST_HEAD(&hw->agg_list);
> +	/* Initialize max burst size */
> +	if (!hw->max_burst_size)
> +		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
> +
> +	status = ice_init_fltr_mgmt_struct(hw);
> +	if (status)
> +		goto err_unroll_sched;
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +	/* some of the register write workarounds to get Rx working */
> +	ice_dev_onetime_setup(hw);
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +	/* Get MAC information */
> +	/* A single port can report up to two (LAN and WoL) addresses */
> +	mac_buf = ice_calloc(hw, 2,
> +			     sizeof(struct ice_aqc_manage_mac_read_resp));
> +	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
> +
> +	if (!mac_buf) {
> +		status = ICE_ERR_NO_MEMORY;
> +		goto err_unroll_fltr_mgmt_struct;
> +	}
> +
> +	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
> +	ice_free(hw, mac_buf);
> +
> +	if (status)
> +		goto err_unroll_fltr_mgmt_struct;
> +
> +	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
> +	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
> +
> +
> +	return ICE_SUCCESS;
> +
> +err_unroll_fltr_mgmt_struct:
> +	ice_cleanup_fltr_mgmt_struct(hw);
> +err_unroll_sched:
> +	ice_sched_cleanup_all(hw);
> +err_unroll_alloc:
> +	ice_free(hw, hw->port_info);
> +	hw->port_info = NULL;
> +err_unroll_cqinit:
> +	ice_shutdown_all_ctrlq(hw);
> +	return status;
> +}
> +
> +/**
> + * ice_deinit_hw - unroll initialization operations done by ice_init_hw
> + * @hw: pointer to the hardware structure
> + *
> + * This should be called only during nominal operation, not as a result of
> + * ice_init_hw() failing, since ice_init_hw() itself unrolls the applicable
> + * initializations if it fails for any reason.
> + */
> +void ice_deinit_hw(struct ice_hw *hw)
> +{
> +	ice_cleanup_fltr_mgmt_struct(hw);
> +
> +	ice_sched_cleanup_all(hw);
> +	ice_sched_clear_agg(hw);
> +
> +	if (hw->port_info) {
> +		ice_free(hw, hw->port_info);
> +		hw->port_info = NULL;
> +	}
> +
> +	/* Attempt to disable FW logging before shutting down control queues */
> +	ice_cfg_fw_log(hw, false);
> +	ice_shutdown_all_ctrlq(hw);
> +
> +	/* Clear VSI contexts if not already cleared */
> +	ice_clear_all_vsi_ctx(hw);
> +}
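
A minimal sketch of the expected pairing in a caller (hypothetical
probe/remove path, not part of this patch):

	if (ice_init_hw(hw) != ICE_SUCCESS)
		return -EIO;
	/* ... nominal device operation ... */
	ice_deinit_hw(hw);	/* skipped when ice_init_hw() itself failed */
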
> +
> +/**
> + * ice_check_reset - Check to see if a global reset is complete
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_check_reset(struct ice_hw *hw)
> +{
> +	u32 cnt, reg = 0, grst_delay;
> +
> +	/* Poll for Device Active state in case a recent CORER, GLOBR,
> +	 * or EMPR has occurred. The grst delay value is in 100ms units.
> +	 * Add 1sec for outstanding AQ commands that can take a long time.
> +	 */
> +#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
> +#define GLGEN_RSTCTL_GRSTDEL_S	0
> +#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
> +	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
> +		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
> +
> +	for (cnt = 0; cnt < grst_delay; cnt++) {
> +		ice_msec_delay(100, true);
> +		reg = rd32(hw, GLGEN_RSTAT);
> +		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
> +			break;
> +	}
> +
> +	if (cnt == grst_delay) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Global reset polling failed to complete.\n");
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
> +				 GLNVM_ULD_GLOBR_DONE_M)
> +
> +	/* Device is Active; check Global Reset processes are done */
> +	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
> +		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
> +		if (reg == ICE_RESET_DONE_MASK) {
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "Global reset processes done. %d\n", cnt);
> +			break;
> +		}
> +		ice_msec_delay(10, true);
> +	}
> +
> +	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
> +			  reg);
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_pf_reset - Reset the PF
> + * @hw: pointer to the hardware structure
> + *
> + * If a global reset has been triggered, this function checks
> + * for its completion and then issues the PF reset
> + */
> +static enum ice_status ice_pf_reset(struct ice_hw *hw)
> +{
> +	u32 cnt, reg;
> +
> +	/* If at function entry a global reset was already in progress, i.e.
> +	 * state is not 'device active' or any of the reset done bits are not
> +	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
> +	 * global reset is done.
> +	 */
> +	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
> +	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
> +		/* poll on global reset currently in progress until done */
> +		if (ice_check_reset(hw))
> +			return ICE_ERR_RESET_FAILED;
> +
> +		return ICE_SUCCESS;
> +	}
> +
> +	/* Reset the PF */
> +	reg = rd32(hw, PFGEN_CTRL);
> +
> +	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
> +
> +	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
> +		reg = rd32(hw, PFGEN_CTRL);
> +		if (!(reg & PFGEN_CTRL_PFSWR_M))
> +			break;
> +
> +		ice_msec_delay(1, true);
> +	}
> +
> +	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
> +		ice_debug(hw, ICE_DBG_INIT,
> +			  "PF reset polling failed to complete.\n");
> +		return ICE_ERR_RESET_FAILED;
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_reset - Perform different types of reset
> + * @hw: pointer to the hardware structure
> + * @req: reset request
> + *
> + * This function triggers a reset as specified by the req parameter.
> + *
> + * Note:
> + * If anything other than a PF reset is triggered, PXE mode is restored.
> + * This has to be cleared using ice_clear_pxe_mode again, once the AQ
> + * interface has been restored in the rebuild flow.
> + */
> +enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
> +{
> +	u32 val = 0;
> +
> +	switch (req) {
> +	case ICE_RESET_PFR:
> +		return ice_pf_reset(hw);
> +	case ICE_RESET_CORER:
> +		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
> +		val = GLGEN_RTRIG_CORER_M;
> +		break;
> +	case ICE_RESET_GLOBR:
> +		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
> +		val = GLGEN_RTRIG_GLOBR_M;
> +		break;
> +	default:
> +		return ICE_ERR_PARAM;
> +	}
> +
> +	val |= rd32(hw, GLGEN_RTRIG);
> +	wr32(hw, GLGEN_RTRIG, val);
> +	ice_flush(hw);
> +
> +
> +	/* wait for the FW to be ready */
> +	return ice_check_reset(hw);
> +}
> +
> +
> +
> +/**
> + * ice_copy_rxq_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_rxq_ctx: pointer to the rxq context
> + * @rxq_index: the index of the Rx queue
> + *
> + * Copies rxq context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
> +{
> +	u8 i;
> +
> +	if (!ice_rxq_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (rxq_index > QRX_CTRL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, QRX_CONTEXT(i, rxq_index),
> +		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Rx Queue Context */
> +static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
> +	/* Field		Width	LSB */
> +	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
> +	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
> +	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
> +	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
> +	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
> +	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
> +	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
> +	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
> +	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
> +	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
> +	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
> +	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
> +	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
> +	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
> +	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
> +	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_rxq_ctx
> + * @hw: pointer to the hardware structure
> + * @rlan_ctx: pointer to the rxq context
> + * @rxq_index: the index of the Rx queue
> + *
> + * Converts rxq context from sparse to dense structure and then writes
> + * it to hw register space
> + */
> +enum ice_status
> +ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
> +		  u32 rxq_index)
> +{
> +	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
> +
> +	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
> +	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
> +}
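
A usage sketch for the write path (hypothetical caller: ring_dma, rxq_index
and the field values are illustrative assumptions, with base/dbuf encoded in
128-byte units as the usual convention for this hardware family suggests):

	struct ice_rlan_ctx rlan_ctx = { 0 };

	rlan_ctx.base = ring_dma >> 7;	/* ring base address, 128B units */
	rlan_ctx.qlen = 512;		/* number of descriptors */
	rlan_ctx.dbuf = 2048 >> 7;	/* Rx data buffer size, 128B units */
	status = ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
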
> +
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +/**
> + * ice_clear_rxq_ctx
> + * @hw: pointer to the hardware structure
> + * @rxq_index: the index of the Rx queue to clear
> + *
> + * Clears rxq context in hw register space
> + */
> +enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
> +{
> +	u8 i;
> +
> +	if (rxq_index > QRX_CTRL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +/* LAN Tx Queue Context */
> +const struct ice_ctx_ele ice_tlan_ctx_info[] = {
> +				    /* Field			Width	LSB */
> +	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
> +	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
> +	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
> +	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
> +	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
> +	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
> +	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
> +	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
> +	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
> +	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
> +	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
> +	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
> +	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
> +	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
> +	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
> +	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
> +	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
> +	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
> +	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
> +	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
> +	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
> +	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
> +	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
> +	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
> +	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
> +	{ 0 }
> +};
> +
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +/**
> + * ice_copy_tx_cmpltnq_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
> + * @tx_cmpltnq_index: the index of the completion queue
> + *
> + * Copies Tx completion q context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
> +			      u32 tx_cmpltnq_index)
> +{
> +	u8 i;
> +
> +	if (!ice_tx_cmpltnq_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
> +		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Tx Completion Queue Context */
> +static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
> +				       /* Field			Width   LSB */
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
> +	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_tx_cmpltnq_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_cmpltnq_ctx: pointer to the completion queue context
> + * @tx_cmpltnq_index: the index of the completion queue
> + *
> + * Converts completion queue context from sparse to dense structure and then
> + * writes it to hw register space
> + */
> +enum ice_status
> +ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
> +			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
> +			 u32 tx_cmpltnq_index)
> +{
> +	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
> +
> +	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
> +	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
> +}
> +
> +/**
> + * ice_clear_tx_cmpltnq_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_cmpltnq_index: the index of the completion queue to clear
> + *
> + * Clears Tx completion queue context in hw register space
> + */
> +enum ice_status
> +ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
> +{
> +	u8 i;
> +
> +	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/**
> + * ice_copy_tx_drbell_q_ctx_to_hw
> + * @hw: pointer to the hardware structure
> + * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
> + * @tx_drbell_q_index: the index of the doorbell queue
> + *
> + * Copies doorbell q context from dense structure to hw register space
> + */
> +static enum ice_status
> +ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
> +			       u32 tx_drbell_q_index)
> +{
> +	u8 i;
> +
> +	if (!ice_tx_drbell_q_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Copy each dword separately to hw */
> +	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
> +		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
> +		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
> +
> +		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
> +			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
> +/* LAN Tx Doorbell Queue Context info */
> +static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
> +					/* Field		Width   LSB */
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
> +	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
> +	{ 0 }
> +};
> +
> +/**
> + * ice_write_tx_drbell_q_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_drbell_q_ctx: pointer to the doorbell queue context
> + * @tx_drbell_q_index: the index of the doorbell queue
> + *
> + * Converts doorbell queue context from sparse to dense structure and then
> + * writes it to hw register space
> + */
> +enum ice_status
> +ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
> +			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
> +			  u32 tx_drbell_q_index)
> +{
> +	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
> +
> +	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
> +	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
> +}
> +
> +/**
> + * ice_clear_tx_drbell_q_ctx
> + * @hw: pointer to the hardware structure
> + * @tx_drbell_q_index: the index of the doorbell queue to clear
> + *
> + * Clears doorbell queue context in hw register space
> + */
> +enum ice_status
> +ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
> +{
> +	u8 i;
> +
> +	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
> +		return ICE_ERR_PARAM;
> +
> +	/* Clear each dword register separately */
> +	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
> +		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
> +
> +	return ICE_SUCCESS;
> +}
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +/**
> + * ice_debug_cq
> + * @hw: pointer to the hardware structure
> + * @mask: debug mask
> + * @desc: pointer to control queue descriptor
> + * @buf: pointer to command buffer
> + * @buf_len: max length of buf
> + *
> + * Dumps debug log about control command with descriptor contents.
> + */
> +void
> +ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
> +{
> +	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
> +	u16 len;
> +
> +	if (!(mask & hw->debug_mask))
> +		return;
> +
> +	if (!desc)
> +		return;
> +
> +	len = LE16_TO_CPU(cq_desc->datalen);
> +
> +	ice_debug(hw, mask,
> +		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
> +		  LE16_TO_CPU(cq_desc->opcode),
> +		  LE16_TO_CPU(cq_desc->flags),
> +		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
> +	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->cookie_high),
> +		  LE32_TO_CPU(cq_desc->cookie_low));
> +	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->params.generic.param0),
> +		  LE32_TO_CPU(cq_desc->params.generic.param1));
> +	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
> +		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
> +		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
> +	if (buf && cq_desc->datalen != 0) {
> +		ice_debug(hw, mask, "Buffer:\n");
> +		if (buf_len < len)
> +			len = buf_len;
> +
> +		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
> +	}
> +}
> +
> +
> +/* FW Admin Queue command wrappers */
> +
> +/**
> + * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
> + * @hw: pointer to the hw struct
> + * @desc: descriptor describing the command
> + * @buf: buffer to use for indirect commands (NULL for direct commands)
> + * @buf_size: size of buffer for indirect commands (0 for direct commands)
> + * @cd: pointer to command details structure
> + *
> + * Helper function to send FW Admin Queue commands to the FW Admin Queue.
> + */
> +enum ice_status
> +ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
> +		u16 buf_size, struct ice_sq_cd *cd)
> +{
> +	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
> +}
> +
> +/**
> + * ice_aq_get_fw_ver
> + * @hw: pointer to the hw struct
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get the firmware version (0x0001) from the admin queue commands
> + */
> +enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_get_ver *resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	resp = &desc.params.get_ver;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
> +
> +	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +
> +	if (!status) {
> +		hw->fw_branch = resp->fw_branch;
> +		hw->fw_maj_ver = resp->fw_major;
> +		hw->fw_min_ver = resp->fw_minor;
> +		hw->fw_patch = resp->fw_patch;
> +		hw->fw_build = LE32_TO_CPU(resp->fw_build);
> +		hw->api_branch = resp->api_branch;
> +		hw->api_maj_ver = resp->api_major;
> +		hw->api_min_ver = resp->api_minor;
> +		hw->api_patch = resp->api_patch;
> +	}
> +
> +	return status;
> +}
> +
> +
> +/**
> + * ice_aq_q_shutdown
> + * @hw: pointer to the hw struct
> + * @unloading: is the driver unloading itself
> + *
> + * Tell the Firmware that we're shutting down the AdminQ and whether
> + * or not the driver is unloading as well (0x0003).
> + */
> +enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
> +{
> +	struct ice_aqc_q_shutdown *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.q_shutdown;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
> +
> +	if (unloading)
> +		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_aq_req_res
> + * @hw: pointer to the hw struct
> + * @res: resource id
> + * @access: access type
> + * @sdp_number: resource number
> + * @timeout: the maximum time in ms that the driver may hold the resource
> + * @cd: pointer to command details structure or NULL
> + *
> + * Requests common resource using the admin queue commands (0x0008).
> + * When attempting to acquire the Global Config Lock, the driver can
> + * learn of three states:
> + *  1) ICE_SUCCESS -        acquired lock, and can perform download package
> + *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
> + *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
> + *                          successfully downloaded the package; the driver does
> + *                          not have to download the package and can continue
> + *                          loading
> + *
> + * Note that if the caller is in an acquire-lock, perform-action, release-lock
> + * phase of operation, it is possible that the FW may detect a timeout and issue
> + * a CORER. In this case, the driver will receive a CORER interrupt and will
> + * have to determine its cause. The calling thread that is handling this flow
> + * will likely get an error propagated back to it indicating the Download
> + * Package, Update Package or the Release Resource AQ commands timed out.
> + */
> +static enum ice_status
> +ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
> +	       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_req_res *cmd_resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
> +
> +	cmd_resp = &desc.params.res_owner;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
> +
> +	cmd_resp->res_id = CPU_TO_LE16(res);
> +	cmd_resp->access_type = CPU_TO_LE16(access);
> +	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
> +	cmd_resp->timeout = CPU_TO_LE32(*timeout);
> +	*timeout = 0;
> +
> +	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +
> +	/* The completion specifies the maximum time in ms that the driver
> +	 * may hold the resource in the Timeout field.
> +	 */
> +
> +	/* Global config lock response utilizes an additional status field.
> +	 *
> +	 * If the Global config lock resource is held by some other driver, the
> +	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
> +	 * and the timeout field indicates the maximum time the current owner
> +	 * of the resource has to free it.
> +	 */
> +	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
> +		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
> +			*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +			return ICE_SUCCESS;
> +		} else if (LE16_TO_CPU(cmd_resp->status) ==
> +			   ICE_AQ_RES_GLBL_IN_PROG) {
> +			*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +			return ICE_ERR_AQ_ERROR;
> +		} else if (LE16_TO_CPU(cmd_resp->status) ==
> +			   ICE_AQ_RES_GLBL_DONE) {
> +			return ICE_ERR_AQ_NO_WORK;
> +		}
> +
> +		/* invalid FW response, force a timeout immediately */
> +		*timeout = 0;
> +		return ICE_ERR_AQ_ERROR;
> +	}
> +
> +	/* If the resource is held by some other driver, the command completes
> +	 * with a busy return value and the timeout field indicates the maximum
> +	 * time the current owner of the resource has to free it.
> +	 */
> +	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
> +		*timeout = LE32_TO_CPU(cmd_resp->timeout);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_release_res
> + * @hw: pointer to the hw struct
> + * @res: resource id
> + * @sdp_number: resource number
> + * @cd: pointer to command details structure or NULL
> + *
> + * release common resource using the admin queue commands (0x0009)
> + */
> +static enum ice_status
> +ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
> +		   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_req_res *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
> +
> +	cmd = &desc.params.res_owner;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
> +
> +	cmd->res_id = CPU_TO_LE16(res);
> +	cmd->res_number = CPU_TO_LE32(sdp_number);
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_acquire_res
> + * @hw: pointer to the HW structure
> + * @res: resource id
> + * @access: access type (read or write)
> + * @timeout: timeout in milliseconds
> + *
> + * This function will attempt to acquire the ownership of a resource.
> + */
> +enum ice_status
> +ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +		enum ice_aq_res_access_type access, u32 timeout)
> +{
> +#define ICE_RES_POLLING_DELAY_MS	10
> +	u32 delay = ICE_RES_POLLING_DELAY_MS;
> +	u32 time_left = timeout;
> +	enum ice_status status;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
> +
> +	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
> +
> +	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
> +	 * previously acquired the resource and performed any necessary updates;
> +	 * in this case the caller does not obtain the resource and has no
> +	 * further work to do.
> +	 */
> +	if (status == ICE_ERR_AQ_NO_WORK)
> +		goto ice_acquire_res_exit;
> +
> +	if (status)
> +		ice_debug(hw, ICE_DBG_RES,
> +			  "resource %d acquire type %d failed.\n", res, access);
> +
> +	/* If necessary, poll until the current lock owner times out */
> +	timeout = time_left;
> +	while (status && timeout && time_left) {
> +		ice_msec_delay(delay, true);
> +		timeout = (timeout > delay) ? timeout - delay : 0;
> +		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
> +
> +		if (status == ICE_ERR_AQ_NO_WORK)
> +			/* lock free, but no work to do */
> +			break;
> +
> +		if (!status)
> +			/* lock acquired */
> +			break;
> +	}
> +	if (status && status != ICE_ERR_AQ_NO_WORK)
> +		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
> +
> +ice_acquire_res_exit:
> +	if (status == ICE_ERR_AQ_NO_WORK) {
> +		if (access == ICE_RES_WRITE)
> +			ice_debug(hw, ICE_DBG_RES,
> +				  "resource indicates no work to do.\n");
> +		else
> +			ice_debug(hw, ICE_DBG_RES,
> +				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
> +	}
> +	return status;
> +}
> +
> +/**
> + * ice_release_res
> + * @hw: pointer to the HW structure
> + * @res: resource id
> + *
> + * This function will release a resource using the proper Admin Command.
> + */
> +void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
> +{
> +	enum ice_status status;
> +	u32 total_delay = 0;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
> +
> +	status = ice_aq_release_res(hw, res, 0, NULL);
> +
> +	/* there are some rare cases when trying to release the resource
> +	 * results in an admin Q timeout, so handle them correctly
> +	 */
> +	while ((status == ICE_ERR_AQ_TIMEOUT) &&
> +	       (total_delay < hw->adminq.sq_cmd_timeout)) {
> +		ice_msec_delay(1, true);
> +		status = ice_aq_release_res(hw, res, 0, NULL);
> +		total_delay++;
> +	}
> +}
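
For reference, the acquire/release pattern these implement would look
roughly like this (a sketch; ICE_NVM_RES_ID, ICE_RES_READ and
ICE_NVM_TIMEOUT are the identifiers I'd expect the NVM code in this series
to use):

	status = ice_acquire_res(hw, ICE_NVM_RES_ID, ICE_RES_READ,
				 ICE_NVM_TIMEOUT);
	if (status)
		return status;
	/* ... access the shared resource ... */
	ice_release_res(hw, ICE_NVM_RES_ID);
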
> +
> +/**
> + * ice_aq_alloc_free_res - command to allocate/free resources
> + * @hw: pointer to the hw struct
> + * @num_entries: number of resource entries in buffer
> + * @buf: Indirect buffer to hold data parameters and response
> + * @buf_size: size of buffer for indirect commands
> + * @opc: pass in the command opcode
> + * @cd: pointer to command details structure or NULL
> + *
> + * Helper function to allocate/free resources using the admin queue commands
> + */
> +enum ice_status
> +ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
> +		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
> +		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_alloc_free_res_cmd *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
> +
> +	cmd = &desc.params.sw_res_ctrl;
> +
> +	if (!buf)
> +		return ICE_ERR_PARAM;
> +
> +	if (buf_size < (num_entries * sizeof(buf->elem[0])))
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, opc);
> +
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	cmd->num_entries = CPU_TO_LE16(num_entries);
> +
> +	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +}
> +
> +
> +/**
> + * ice_get_num_per_func - determine number of resources per PF
> + * @hw: pointer to the hw structure
> + * @max: value to be evenly split among the valid PFs
> + *
> + * Determine the number of valid functions by going through the bitmap returned
> + * from parsing capabilities and use this to calculate the number of resources
> + * per PF based on the max value passed in.
> + */
> +static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
> +{
> +	u8 funcs;
> +
> +#define ICE_CAPS_VALID_FUNCS_M	0xFF
> +	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
> +			     ICE_CAPS_VALID_FUNCS_M);
> +
> +	if (!funcs)
> +		return 0;
> +
> +	return max / funcs;
> +}
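
(For example, with all eight function bits set in valid_functions and
max = 768, each PF would get 768 / 8 = 96.)
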
> +
> +/**
> + * ice_parse_caps - parse function/device capabilities
> + * @hw: pointer to the hw struct
> + * @buf: pointer to a buffer containing function/device capability records
> + * @cap_count: number of capability records in the list
> + * @opc: type of capabilities list to parse
> + *
> + * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
> + */
> +static void
> +ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
> +	       enum ice_adminq_opc opc)
> +{
> +	struct ice_aqc_list_caps_elem *cap_resp;
> +	struct ice_hw_func_caps *func_p = NULL;
> +	struct ice_hw_dev_caps *dev_p = NULL;
> +	struct ice_hw_common_caps *caps;
> +	u32 i;
> +
> +	if (!buf)
> +		return;
> +
> +	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
> +
> +	if (opc == ice_aqc_opc_list_dev_caps) {
> +		dev_p = &hw->dev_caps;
> +		caps = &dev_p->common_cap;
> +	} else if (opc == ice_aqc_opc_list_func_caps) {
> +		func_p = &hw->func_caps;
> +		caps = &func_p->common_cap;
> +	} else {
> +		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
> +		return;
> +	}
> +
> +	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
> +		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
> +		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
> +		u32 number = LE32_TO_CPU(cap_resp->number);
> +		u16 cap = LE16_TO_CPU(cap_resp->cap);
> +
> +		switch (cap) {
> +		case ICE_AQC_CAPS_VALID_FUNCTIONS:
> +			caps->valid_functions = number;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Valid Functions = %d\n",
> +				  caps->valid_functions);
> +			break;
> +		case ICE_AQC_CAPS_VSI:
> +			if (dev_p) {
> +				dev_p->num_vsi_allocd_to_host = number;
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Dev.VSI cnt = %d\n",
> +					  dev_p->num_vsi_allocd_to_host);
> +			} else if (func_p) {
> +				func_p->guar_num_vsi =
> +					ice_get_num_per_func(hw, ICE_MAX_VSI);
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Func.VSI cnt = %d\n",
> +					  number);
> +			}
> +			break;
> +		case ICE_AQC_CAPS_RSS:
> +			caps->rss_table_size = number;
> +			caps->rss_table_entry_width = logical_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: RSS table size = %d\n",
> +				  caps->rss_table_size);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: RSS table width = %d\n",
> +				  caps->rss_table_entry_width);
> +			break;
> +		case ICE_AQC_CAPS_RXQS:
> +			caps->num_rxq = number;
> +			caps->rxq_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Rx first queue ID = %d\n",
> +				  caps->rxq_first_id);
> +			break;
> +		case ICE_AQC_CAPS_TXQS:
> +			caps->num_txq = number;
> +			caps->txq_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Tx first queue ID = %d\n",
> +				  caps->txq_first_id);
> +			break;
> +		case ICE_AQC_CAPS_MSIX:
> +			caps->num_msix_vectors = number;
> +			caps->msix_vector_first_id = phys_id;
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: MSIX vector count = %d\n",
> +				  caps->num_msix_vectors);
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: MSIX first vector index = %d\n",
> +				  caps->msix_vector_first_id);
> +			break;
> +		case ICE_AQC_CAPS_MAX_MTU:
> +			caps->max_mtu = number;
> +			if (dev_p)
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: Dev.MaxMTU = %d\n",
> +					  caps->max_mtu);
> +			else if (func_p)
> +				ice_debug(hw, ICE_DBG_INIT,
> +					  "HW caps: func.MaxMTU = %d\n",
> +					  caps->max_mtu);
> +			break;
> +		default:
> +			ice_debug(hw, ICE_DBG_INIT,
> +				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
> +				  cap);
> +			break;
> +		}
> +	}
> +}
> +
> +/**
> + * ice_aq_discover_caps - query function/device capabilities
> + * @hw: pointer to the hw struct
> + * @buf: a virtual buffer to hold the capabilities
> + * @buf_size: Size of the virtual buffer
> + * @cap_count: set to the FW-reported capability count when the AQ returns ENOMEM
> + * @opc: capabilities type to discover - pass in the command opcode
> + * @cd: pointer to command details structure or NULL
> + *
> + * Get the function(0x000a)/device(0x000b) capabilities description from
> + * the firmware.
> + */
> +static enum ice_status
> +ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
> +		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_list_caps *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +
> +	cmd = &desc.params.get_cap;
> +
> +	if (opc != ice_aqc_opc_list_func_caps &&
> +	    opc != ice_aqc_opc_list_dev_caps)
> +		return ICE_ERR_PARAM;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, opc);
> +
> +	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
> +	if (!status)
> +		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
> +	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
> +		*cap_count = LE32_TO_CPU(cmd->count);
> +	return status;
> +}
> +
> +/**
> + * ice_discover_caps - get info about the HW
> + * @hw: pointer to the hardware structure
> + * @opc: capabilities type to discover - pass in the command opcode
> + */
> +static enum ice_status
> +ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
> +{
> +	enum ice_status status;
> +	u32 cap_count;
> +	u16 cbuf_len;
> +	u8 retries;
> +
> +	/* The driver doesn't know how many capabilities the device will return
> +	 * so the buffer size required isn't known ahead of time. The driver
> +	 * starts with cbuf_len and if this turns out to be insufficient, the
> +	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
> +	 * The driver then allocates the buffer based on the count and retries
> +	 * the operation. So it follows that the retry count is 2.
> +	 */
> +#define ICE_GET_CAP_BUF_COUNT	40
> +#define ICE_GET_CAP_RETRY_COUNT	2
> +
> +	cap_count = ICE_GET_CAP_BUF_COUNT;
> +	retries = ICE_GET_CAP_RETRY_COUNT;
> +
> +	do {
> +		void *cbuf;
> +
> +		cbuf_len = (u16)(cap_count *
> +				 sizeof(struct ice_aqc_list_caps_elem));
> +		cbuf = ice_malloc(hw, cbuf_len);
> +		if (!cbuf)

Prefer an explicit pointer test here, i.e. "if (cbuf == NULL)" rather than
"if (!cbuf)", per the DPDK coding style; the bare "== NULL" notes below
refer to the same thing.
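
i.e. the check above would become:

	cbuf = ice_malloc(hw, cbuf_len);
	if (cbuf == NULL)
		return ICE_ERR_NO_MEMORY;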

> +			return ICE_ERR_NO_MEMORY;
> +
> +		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
> +					      opc, NULL);
> +		ice_free(hw, cbuf);
> +
> +		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
> +			break;
> +
> +		/* If ENOMEM is returned, try again with bigger buffer */
> +	} while (--retries);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_get_caps - get info about the HW
> + * @hw: pointer to the hardware structure
> + */
> +enum ice_status ice_get_caps(struct ice_hw *hw)
> +{
> +	enum ice_status status;
> +
> +	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
> +	if (!status)
> +		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
> +
> +	return status;
> +}
> +
> +/**
> + * ice_aq_manage_mac_write - manage MAC address write command
> + * @hw: pointer to the hw struct
> + * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
> + * @flags: flags to control write behavior
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function is used to write MAC address to the NVM (0x0108).
> + */
> +enum ice_status
> +ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
> +			struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_manage_mac_write *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.mac_write;
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
> +
> +	cmd->flags = flags;
> +
> +
> +	/* Prep values for flags, sah, sal */
> +	cmd->sah = HTONS(*((const u16 *)mac_addr));
> +	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));

Any particular reason these aren't rte_cpu_to_be_16/32?
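
i.e., if the intent is a host-to-big-endian store, something like:

	cmd->sah = rte_cpu_to_be_16(*(const u16 *)mac_addr);
	cmd->sal = rte_cpu_to_be_32(*(const u32 *)(mac_addr + 2));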

> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_clear_pxe_mode
> + * @hw: pointer to the hw struct
> + *
> + * Tell the firmware that the driver is taking over from PXE (0x0110).
> + */
> +static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
> +{
> +	struct ice_aq_desc desc;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
> +	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
> +}
> +
> +/**
> + * ice_clear_pxe_mode - clear pxe operations mode
> + * @hw: pointer to the hw struct
> + *
> + * Make sure all PXE mode settings are cleared, including things
> + * like descriptor fetch/write-back mode.
> + */
> +void ice_clear_pxe_mode(struct ice_hw *hw)
> +{
> +	if (ice_check_sq_alive(hw, &hw->adminq))
> +		ice_aq_clear_pxe_mode(hw);
> +}
> +
> +
> +/**
> + * ice_get_link_speed_based_on_phy_type - returns link speed
> + * @phy_type_low: lower part of phy_type
> + * @phy_type_high: higher part of phy_type
> + *
> + * This helper function will convert an entry in phy type structure
> + * [phy_type_low, phy_type_high] to its corresponding link speed.
> + * Note: In the structure of [phy_type_low, phy_type_high], there should
> + * be exactly one bit set, as this function converts one phy type to its
> + * speed.
> + * If no bit is set, ICE_AQ_LINK_SPEED_UNKNOWN is returned.
> + * If more than one bit is set, ICE_AQ_LINK_SPEED_UNKNOWN is returned.
> + */
> +static u16
> +ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
> +{
> +	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
> +	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
> +
> +	switch (phy_type_low) {
> +	case ICE_PHY_TYPE_LOW_100BASE_TX:
> +	case ICE_PHY_TYPE_LOW_100M_SGMII:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_1000BASE_T:
> +	case ICE_PHY_TYPE_LOW_1000BASE_SX:
> +	case ICE_PHY_TYPE_LOW_1000BASE_LX:
> +	case ICE_PHY_TYPE_LOW_1000BASE_KX:
> +	case ICE_PHY_TYPE_LOW_1G_SGMII:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_2500BASE_T:
> +	case ICE_PHY_TYPE_LOW_2500BASE_X:
> +	case ICE_PHY_TYPE_LOW_2500BASE_KX:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_5GBASE_T:
> +	case ICE_PHY_TYPE_LOW_5GBASE_KR:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_10GBASE_T:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
> +	case ICE_PHY_TYPE_LOW_10GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_10GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_25GBASE_T:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
> +	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
> +	case ICE_PHY_TYPE_LOW_25GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
> +	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
> +	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
> +	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
> +	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_40G_XLAUI:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
> +	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_LAUI2:
> +	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_AUI2:
> +	case ICE_PHY_TYPE_LOW_50GBASE_CP:
> +	case ICE_PHY_TYPE_LOW_50GBASE_SR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_FR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_LR:
> +	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
> +	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_50G_AUI1:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
> +		break;
> +	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
> +	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_100G_CAUI4:
> +	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
> +	case ICE_PHY_TYPE_LOW_100G_AUI4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
> +	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
> +	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
> +	case ICE_PHY_TYPE_LOW_100GBASE_DR:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
> +		break;
> +	default:
> +		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
> +		break;
> +	}
> +
> +	switch (phy_type_high) {
> +	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
> +	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
> +	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
> +	case ICE_PHY_TYPE_HIGH_100G_AUI2:
> +		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
> +		break;
> +	default:
> +		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
> +		break;
> +	}
> +
> +	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
> +	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return ICE_AQ_LINK_SPEED_UNKNOWN;
> +	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
> +		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return ICE_AQ_LINK_SPEED_UNKNOWN;
> +	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
> +		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
> +		return speed_phy_type_low;
> +	else
> +		return speed_phy_type_high;
> +}
> +
> +/**
> + * ice_update_phy_type
> + * @phy_type_low: pointer to the lower part of phy_type
> + * @phy_type_high: pointer to the higher part of phy_type
> + * @link_speeds_bitmap: targeted link speeds bitmap
> + *
> + * Note: For the format of link_speeds_bitmap, see
> + * [ice_aqc_get_link_status->link_speed]. The caller may pass in a
> + * link_speeds_bitmap that includes multiple speeds.
> + *
> + * Each bit in the [phy_type_low, phy_type_high] structure represents a
> + * certain link speed. This helper function turns on the bits in
> + * [phy_type_low, phy_type_high] that correspond to the value of the
> + * link_speeds_bitmap input parameter.
> + */
> +void
> +ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
> +		    u16 link_speeds_bitmap)
> +{
> +	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
> +	u64 pt_high;
> +	u64 pt_low;
> +	int index;
> +
> +	/* We first check with low part of phy_type */
> +	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
> +		pt_low = BIT_ULL(index);
> +		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
> +
> +		if (link_speeds_bitmap & speed)
> +			*phy_type_low |= BIT_ULL(index);
> +	}
> +
> +	/* We then check with high part of phy_type */
> +	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
> +		pt_high = BIT_ULL(index);
> +		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
> +
> +		if (link_speeds_bitmap & speed)
> +			*phy_type_high |= BIT_ULL(index);
> +	}
> +}
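
A quick usage example (hypothetical caller asking for every 10G and 25G
phy type):

	u64 phy_low = 0, phy_high = 0;

	ice_update_phy_type(&phy_low, &phy_high,
			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
	/* phy_low/phy_high now carry the bits for every matching phy type */
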
> +
> +/**
> + * ice_aq_set_phy_cfg
> + * @hw: pointer to the hw struct
> + * @lport: logical port number
> + * @cfg: structure with PHY configuration data to be set
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set the various PHY configuration parameters supported on the Port.
> + * One or more of the Set PHY config parameters may be ignored in an MFP
> + * mode as the PF may not have the privilege to set some of the PHY Config
> + * parameters. This status will be indicated by the command response (0x0601).
> + */
> +enum ice_status
> +ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
> +		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
> +{
> +	struct ice_aq_desc desc;
> +
> +	if (!cfg)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
> +	desc.params.set_phy.lport_num = lport;
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
> +}
> +
> +/**
> + * ice_update_link_info - update status of the HW network link
> + * @pi: port info structure of the interested logical port
> + */
> +enum ice_status ice_update_link_info(struct ice_port_info *pi)
> +{
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	struct ice_phy_info *phy_info;
> +	enum ice_status status;
> +	struct ice_hw *hw;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	hw = pi->hw;
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));

No cast required; ice_malloc() returns void *, which converts implicitly in C.
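
i.e., combined with the explicit NULL test noted earlier:

	pcaps = ice_malloc(hw, sizeof(*pcaps));
	if (pcaps == NULL)
		return ICE_ERR_NO_MEMORY;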

> +	if (!pcaps)
> +		return ICE_ERR_NO_MEMORY;

== NULL

> +
> +	phy_info = &pi->phy;
> +	status = ice_aq_get_link_info(pi, true, NULL, NULL);
> +	if (status)
> +		goto out;
> +
> +	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
> +		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
> +					     pcaps, NULL);
> +		if (status)
> +			goto out;
> +
> +		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
> +			   sizeof(phy_info->link_info.module_type),
> +			   ICE_NONDMA_TO_NONDMA);
> +	}
> +out:
> +	ice_free(hw, pcaps);
> +	return status;
> +}
> +
> +/**
> + * ice_set_fc
> + * @pi: port information structure
> + * @aq_failures: pointer to status code, specific to ice_set_fc routine
> + * @ena_auto_link_update: enable automatic link update
> + *
> + * Set the requested flow control mode.
> + */
> +enum ice_status
> +ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
> +{
> +	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
> +	struct ice_aqc_get_phy_caps_data *pcaps;
> +	enum ice_status status;
> +	u8 pause_mask = 0x0;
> +	struct ice_hw *hw;
> +
> +	if (!pi)
> +		return ICE_ERR_PARAM;

== NULL

> +	hw = pi->hw;
> +	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
> +
> +	switch (pi->fc.req_mode) {
> +	case ICE_FC_FULL:
> +		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
> +		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
> +		break;
> +	case ICE_FC_RX_PAUSE:
> +		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
> +		break;
> +	case ICE_FC_TX_PAUSE:
> +		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	pcaps = (struct ice_aqc_get_phy_caps_data *)
> +		ice_malloc(hw, sizeof(*pcaps));

No cast required.

> +	if (!pcaps)
> +		return ICE_ERR_NO_MEMORY;

== NULL

> +
> +	/* Get the current phy config */
> +	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
> +				     NULL);
> +	if (status) {
> +		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
> +		goto out;
> +	}
> +
> +	/* clear the old pause settings */
> +	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
> +				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
> +	/* set the new capabilities */
> +	cfg.caps |= pause_mask;
> +	/* If the capabilities have changed, then set the new config */
> +	if (cfg.caps != pcaps->caps) {
> +		int retry_count, retry_max = 10;
> +
> +		/* Auto restart link so settings take effect */
> +		if (ena_auto_link_update)
> +			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
> +		/* Copy over all the old settings */
> +		cfg.phy_type_high = pcaps->phy_type_high;
> +		cfg.phy_type_low = pcaps->phy_type_low;
> +		cfg.low_power_ctrl = pcaps->low_power_ctrl;
> +		cfg.eee_cap = pcaps->eee_cap;
> +		cfg.eeer_value = pcaps->eeer_value;
> +		cfg.link_fec_opt = pcaps->link_fec_options;
> +
> +		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
> +		if (status) {
> +			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
> +			goto out;
> +		}
> +
> +		/* Update the link info
> +		 * It sometimes takes a really long time for link to
> +		 * come back from the atomic reset. Thus, we wait a
> +		 * little bit.
> +		 */
> +		for (retry_count = 0; retry_count < retry_max; retry_count++) {
> +			status = ice_update_link_info(pi);
> +
> +			if (status == ICE_SUCCESS)
> +				break;
> +
> +			ice_msec_delay(100, true);
> +		}
> +
> +		if (status)
> +			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
> +	}
> +
> +out:
> +	ice_free(hw, pcaps);
> +	return status;
> +}
> +
> +/**
> + * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
> + * @caps: PHY ability structure to copy data from
> + * @cfg: PHY configuration structure to copy data to
> + *
> + * Helper function to copy AQC PHY get ability data to PHY set configuration
> + * data structure
> + */
> +void
> +ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
> +			 struct ice_aqc_set_phy_cfg_data *cfg)
> +{
> +	if (!caps || !cfg)
> +		return;

== NULL

> +
> +	cfg->phy_type_low = caps->phy_type_low;
> +	cfg->phy_type_high = caps->phy_type_high;
> +	cfg->caps = caps->caps;
> +	cfg->low_power_ctrl = caps->low_power_ctrl;
> +	cfg->eee_cap = caps->eee_cap;
> +	cfg->eeer_value = caps->eeer_value;
> +	cfg->link_fec_opt = caps->link_fec_options;
> +}
> +
> +/**
> + * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
> + * @cfg: PHY configuration data to set FEC mode
> + * @fec: FEC mode to configure
> + *
> + * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
> + * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
> + * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
> + */
> +void
> +ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
> +{
> +	switch (fec) {
> +	case ICE_FEC_BASER:
> +		/* Clear auto FEC and RS bits, and AND BASE-R ability
> +		 * bits and OR request bits.
> +		 */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
> +				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
> +		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
> +				     ICE_AQC_PHY_FEC_25G_KR_REQ;
> +		break;
> +	case ICE_FEC_RS:
> +		/* Clear auto FEC and BASE-R bits, and AND RS ability
> +		 * bits and OR request bits.
> +		 */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
> +		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
> +				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
> +		break;
> +	case ICE_FEC_NONE:
> +		/* Clear auto FEC and all FEC option bits. */
> +		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
> +		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
> +		break;
> +	case ICE_FEC_AUTO:
> +		/* AND auto FEC bit, and all caps bits. */
> +		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
> +		break;
> +	}
> +}
> +
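A sketch of the calling sequence the comment above describes, reusing ice_copy_phy_caps_to_cfg() and the AQ wrappers from this file; illustrative only, with error handling mostly elided:

	struct ice_aqc_get_phy_caps_data pcaps = { 0 };
	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
	enum ice_status status;

	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, &pcaps, NULL);
	if (status == ICE_SUCCESS) {
		/* carries over caps (incl. the auto-FEC bit) and link_fec_options */
		ice_copy_phy_caps_to_cfg(&pcaps, &cfg);
		ice_cfg_phy_fec(&cfg, ICE_FEC_RS);	/* request RS FEC */
		status = ice_aq_set_phy_cfg(pi->hw, pi->lport, &cfg, NULL);
	}
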
> +/**
> + * ice_get_link_status - get status of the HW network link
> + * @pi: port information structure
> + * @link_up: pointer to bool (true/false = linkup/linkdown)
> + *
> + * Variable link_up is true if link is up, false if link is down.
> + * The variable link_up is invalid if status is non-zero. As a
> + * result of this call, link status reporting becomes enabled
> + */
> +enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
> +{
> +	struct ice_phy_info *phy_info;
> +	enum ice_status status = ICE_SUCCESS;
> +
> +	if (!pi || !link_up)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	phy_info = &pi->phy;
> +
> +	if (phy_info->get_link_info) {
> +		status = ice_update_link_info(pi);
> +
> +		if (status)
> +			ice_debug(pi->hw, ICE_DBG_LINK,
> +				  "get link status error, status = %d\n",
> +				  status);
> +	}
> +
> +	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
> +
> +	return status;
> +}
> +
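A minimal usage sketch for this helper (handle_link_up() is a hypothetical caller-side handler):

	bool link_up;
	enum ice_status status;

	status = ice_get_link_status(pi, &link_up);
	if (status == ICE_SUCCESS && link_up)
		handle_link_up(pi);
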
> +/**
> + * ice_aq_set_link_restart_an
> + * @pi: pointer to the port information structure
> + * @ena_link: if true: enable link, if false: disable link
> + * @cd: pointer to command details structure or NULL
> + *
> + * Sets up the link and restarts the Auto-Negotiation over the link.
> + */
> +enum ice_status
> +ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
> +			   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_restart_an *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.restart_an;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
> +
> +	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
> +	cmd->lport_num = pi->lport;
> +	if (ena_link)
> +		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
> +	else
> +		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
> +
> +	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_set_event_mask
> + * @hw: pointer to the hw struct
> + * @port_num: port number of the physical function
> + * @mask: event mask to be set
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set event mask (0x0613)
> + */
> +enum ice_status
> +ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
> +		      struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_event_mask *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_event_mask;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
> +
> +	cmd->lport_num = port_num;
> +
> +	cmd->event_mask = CPU_TO_LE16(mask);
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * ice_aq_set_mac_loopback
> + * @hw: pointer to the hw struct
> + * @ena_lpbk: Enable or Disable loopback
> + * @cd: pointer to command details structure or NULL
> + *
> + * Enable/disable loopback on a given port
> + */
> +enum ice_status
> +ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_mac_lb *cmd;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_mac_lb;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
> +	if (ena_lpbk)
> +		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +
> +/**
> + * ice_aq_set_port_id_led
> + * @pi: pointer to the port information
> + * @is_orig_mode: is this LED set to original mode (by the net-list)
> + * @cd: pointer to command details structure or NULL
> + *
> + * Set LED value for the given port (0x06e9)
> + */
> +enum ice_status
> +ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
> +		       struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_set_port_id_led *cmd;
> +	struct ice_hw *hw = pi->hw;
> +	struct ice_aq_desc desc;
> +
> +	cmd = &desc.params.set_port_id_led;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
> +
> +
> +	if (is_orig_mode)
> +		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
> +	else
> +		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
> +
> +	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
> +}
> +
> +/**
> + * __ice_aq_get_set_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_id: VSI FW index
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + * @glob_lut_idx: global LUT index
> + * @set: set true to set the table, false to get the table
> + *
> + * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
> + */
> +static enum ice_status
> +__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
> +			 u16 lut_size, u8 glob_lut_idx, bool set)
> +{
> +	struct ice_aqc_get_set_rss_lut *cmd_resp;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 flags = 0;
> +
> +	cmd_resp = &desc.params.get_set_rss_lut;
> +
> +	if (set) {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
> +		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +	} else {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
> +	}
> +
> +	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
> +					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
> +					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
> +				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
> +
> +	switch (lut_type) {
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
> +		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
> +		break;
> +	default:
> +		status = ICE_ERR_PARAM;
> +		goto ice_aq_get_set_rss_lut_exit;
> +	}
> +
> +	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
> +		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
> +			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
> +
> +		if (!set)
> +			goto ice_aq_get_set_rss_lut_send;
> +	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
> +		if (!set)
> +			goto ice_aq_get_set_rss_lut_send;
> +	} else {
> +		goto ice_aq_get_set_rss_lut_send;
> +	}
> +
> +	/* LUT size is only valid for Global and PF table types */
> +	switch (lut_size) {
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
> +		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +		break;
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
> +		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
> +			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +		break;
> +	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
> +		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
> +			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
> +				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
> +				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
> +			break;
> +		}
> +		/* fall-through */
> +	default:
> +		status = ICE_ERR_PARAM;
> +		goto ice_aq_get_set_rss_lut_exit;
> +	}
> +
> +ice_aq_get_set_rss_lut_send:
> +	cmd_resp->flags = CPU_TO_LE16(flags);
> +	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
> +
> +ice_aq_get_set_rss_lut_exit:
> +	return status;
> +}
> +
> +/**
> + * ice_aq_get_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: software VSI handle
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + *
> + * get the RSS lookup table, PF or VSI type
> + */
> +enum ice_status
> +ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
> +		   u8 *lut, u16 lut_size)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					lut_type, lut, lut_size, 0, false);
> +}
> +
> +/**
> + * ice_aq_set_rss_lut
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: software VSI handle
> + * @lut_type: LUT table type
> + * @lut: pointer to the LUT buffer provided by the caller
> + * @lut_size: size of the LUT buffer
> + *
> + * set the RSS lookup table, PF or VSI type
> + */
> +enum ice_status
> +ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
> +		   u8 *lut, u16 lut_size)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)

== NULL

> +		return ICE_ERR_PARAM;
> +
> +	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					lut_type, lut, lut_size, 0, true);
> +}
> +
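A usage sketch for programming a 512-entry PF-wide table, assuming ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512 is defined as 512 in ice_adminq_cmd.h (fill_lut() is a hypothetical helper that spreads queues over the table):

	u8 lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512];
	enum ice_status status;

	fill_lut(lut, sizeof(lut));
	status = ice_aq_set_rss_lut(hw, vsi_handle,
				    ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
				    lut, sizeof(lut));
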
> +/**
> + * __ice_aq_get_set_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_id: VSI FW index
> + * @key: pointer to key info struct
> + * @set: set true to set the key, false to get the key
> + *
> + * get (0x0B04) or set (0x0B02) the RSS key per VSI
> + */
> +static enum ice_status
> +__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
> +				    struct ice_aqc_get_set_rss_keys *key,
> +				    bool set)
> +{
> +	struct ice_aqc_get_set_rss_key *cmd_resp;
> +	u16 key_size = sizeof(*key);
> +	struct ice_aq_desc desc;
> +
> +	cmd_resp = &desc.params.get_set_rss_key;
> +
> +	if (set) {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
> +		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +	} else {
> +		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
> +	}
> +
> +	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
> +					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
> +					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
> +				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
> +
> +	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
> +}
> +
> +/**
> + * ice_aq_get_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_handle: software VSI handle
> + * @key: pointer to key info struct
> + *
> + * get the RSS key per VSI
> + */
> +enum ice_status
> +ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *key)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					key, false);
> +}
> +
> +/**
> + * ice_aq_set_rss_key
> + * @hw: pointer to the hw struct
> + * @vsi_handle: software VSI handle
> + * @keys: pointer to key info struct
> + *
> + * set the RSS key per VSI
> + */
> +enum ice_status
> +ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys)
> +{
> +	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
> +					keys, true);
> +}
> +
> +/**
> + * ice_aq_add_lan_txq
> + * @hw: pointer to the hardware structure
> + * @num_qgrps: Number of added queue groups
> + * @qg_list: list of queue groups to be added
> + * @buf_size: size of buffer for indirect command
> + * @cd: pointer to command details structure or NULL
> + *
> + * Add Tx LAN queue (0x0C30)
> + *
> + * NOTE:
> + * Prior to calling add Tx LAN queue:
> + * Initialize the following as part of the Tx queue context:
> + * Completion queue ID if the queue uses Completion queue, Quanta profile,
> + * Cache profile and Packet shaper profile.
> + *
> + * After add Tx LAN queue AQ command is completed:
> + * Interrupts should be associated with specific queues,
> + * Association of Tx queue to Doorbell queue is not part of Add LAN Tx queue
> + * flow.
> + */
> +static enum ice_status
> +ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
> +		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
> +		   struct ice_sq_cd *cd)
> +{
> +	u16 i, sum_header_size, sum_q_size = 0;
> +	struct ice_aqc_add_tx_qgrp *list;
> +	struct ice_aqc_add_txqs *cmd;
> +	struct ice_aq_desc desc;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
> +
> +	cmd = &desc.params.add_txqs;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
> +
> +	if (!qg_list)
> +		return ICE_ERR_PARAM;

== NULL

> +
> +	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
> +		return ICE_ERR_PARAM;
> +
> +	sum_header_size = num_qgrps *
> +		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
> +
> +	list = qg_list;
> +	for (i = 0; i < num_qgrps; i++) {
> +		struct ice_aqc_add_txqs_perq *q = list->txqs;
> +
> +		sum_q_size += list->num_txqs * sizeof(*q);
> +		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
> +	}
> +
> +	if (buf_size != (sum_header_size + sum_q_size))
> +		return ICE_ERR_PARAM;
> +
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	cmd->num_qgrps = num_qgrps;
> +
> +	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
> +}
> +
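To make the buf_size check above concrete: for a single group carrying two queues, the expected buffer size is one group header, sizeof(struct ice_aqc_add_tx_qgrp) - sizeof(struct ice_aqc_add_txqs_perq), plus 2 * sizeof(struct ice_aqc_add_txqs_perq) for the per-queue entries; a second group, if present, starts immediately after the last per-queue entry of the first, which is exactly how the loop advances the list pointer.
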
> +/**
> + * ice_aq_dis_lan_txq
> + * @hw: pointer to the hardware structure
> + * @num_qgrps: number of groups in the list
> + * @qg_list: the list of groups to disable
> + * @buf_size: the total size of the qg_list buffer in bytes
> + * @rst_src: if called due to reset, specifies the rst source
> + * @vmvf_num: the relative vm or vf number that is undergoing the reset
> + * @cd: pointer to command details structure or NULL
> + *
> + * Disable LAN Tx queue (0x0C31)
> + */
> +static enum ice_status
> +ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
> +		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
> +		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		   struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_dis_txqs *cmd;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 i, sz = 0;
> +
> +	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
> +	cmd = &desc.params.dis_txqs;
> +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
> +
> +	/* qg_list can be NULL only in VM/VF reset flow */
> +	if (!qg_list && !rst_src)
> +		return ICE_ERR_PARAM;
> +
> +	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
> +		return ICE_ERR_PARAM;
> +
> +	cmd->num_entries = num_qgrps;
> +
> +	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
> +					    ICE_AQC_Q_DIS_TIMEOUT_M);
> +
> +	switch (rst_src) {
> +	case ICE_VM_RESET:
> +		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
> +		cmd->vmvf_and_timeout |=
> +			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
> +		break;
> +	case ICE_NO_RESET:
> +	default:
> +		break;
> +	}
> +
> +	/* flush pipe on time out */
> +	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
> +	/* If no queue group info, we are in a reset flow. Issue the AQ */
> +	if (!qg_list)
> +		goto do_aq;

== NULL

> +
> +	/* set RD bit to indicate that command buffer is provided by the driver
> +	 * and it needs to be read by the firmware
> +	 */
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	for (i = 0; i < num_qgrps; ++i) {
> +		/* Calculate the size taken up by the queue IDs in this group */
> +		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
> +
> +		/* Add the size of the group header */
> +		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
> +
> +		/* If the num of queues is even, add 2 bytes of padding */
> +		if ((qg_list[i].num_qs % 2) == 0)
> +			sz += 2;
> +	}
> +
> +	if (buf_size != sz)
> +		return ICE_ERR_PARAM;
> +
> +do_aq:
> +	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
> +	if (status) {
> +		if (!qg_list)
> +			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
> +				  vmvf_num, hw->adminq.sq_last_status);
> +		else
> +			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
> +				  LE16_TO_CPU(qg_list[0].q_id[0]),
> +				  hw->adminq.sq_last_status);
> +	}
> +	return status;
> +}
> +
> +
> +/* End of FW Admin Queue command wrappers */
> +
> +/**
> + * ice_write_byte - write a byte to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u8 src_byte, dest_byte, mask;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +	mask = (u8)(BIT(ce_info->width) - 1);
> +
> +	src_byte = *from;
> +	src_byte &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_byte <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
> +
> +	dest_byte &= ~mask;	/* get the bits not changing */
> +	dest_byte |= src_byte;	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_write_word - write a word to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u16 src_word, mask;
> +	__le16 dest_word;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +	mask = BIT(ce_info->width) - 1;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_word = *(u16 *)from;
> +	src_word &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_word <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
> +
> +	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
> +	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_write_dword - write a dword to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u32 src_dword, mask;
> +	__le32 dest_dword;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +
> +	/* if the field width is exactly 32 on an x86 machine, then the shift
> +	 * operation will not work because the SHL instructions count is masked
> +	 * to 5 bits so the shift will do nothing
> +	 */
> +	if (ce_info->width < 32)
> +		mask = BIT(ce_info->width) - 1;
> +	else
> +		mask = (u32)~0;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_dword = *(u32 *)from;
> +	src_dword &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_dword <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
> +
> +	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
> +	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_write_qword - write a qword to a packed context structure
> + * @src_ctx:  the context structure to read from
> + * @dest_ctx: the context to be written to
> + * @ce_info:  a description of the struct to be filled
> + */
> +static void
> +ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	u64 src_qword, mask;
> +	__le64 dest_qword;
> +	u8 *from, *dest;
> +	u16 shift_width;
> +
> +	/* copy from the next struct field */
> +	from = src_ctx + ce_info->offset;
> +
> +	/* prepare the bits and mask */
> +	shift_width = ce_info->lsb % 8;
> +
> +	/* if the field width is exactly 64 on an x86 machine, then the shift
> +	 * operation will not work because the SHL instructions count is masked
> +	 * to 6 bits so the shift will do nothing
> +	 */
> +	if (ce_info->width < 64)
> +		mask = BIT_ULL(ce_info->width) - 1;
> +	else
> +		mask = (u64)~0;
> +
> +	/* don't swizzle the bits until after the mask because the mask bits
> +	 * will be in a different bit position on big endian machines
> +	 */
> +	src_qword = *(u64 *)from;
> +	src_qword &= mask;
> +
> +	/* shift to correct alignment */
> +	mask <<= shift_width;
> +	src_qword <<= shift_width;
> +
> +	/* get the current bits from the target bit string */
> +	dest = dest_ctx + (ce_info->lsb / 8);
> +
> +	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
> +
> +	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
> +	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
> +
> +	/* put it all back */
> +	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
> +}
> +
> +/**
> + * ice_set_ctx - set context bits in packed structure
> + * @src_ctx:  pointer to a generic non-packed context structure
> + * @dest_ctx: pointer to memory for the packed structure
> + * @ce_info:  a description of the structure to be transformed
> + */
> +enum ice_status
> +ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
> +{
> +	int f;
> +
> +	for (f = 0; ce_info[f].width; f++) {
> +		/* We have to deal with each element of the FW response
> +		 * using the correct size so that we are correct regardless
> +		 * of the endianness of the machine.
> +		 */
> +		switch (ce_info[f].size_of) {
> +		case sizeof(u8):
> +			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u16):
> +			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u32):
> +			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		case sizeof(u64):
> +			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
> +			break;
> +		default:
> +			return ICE_ERR_INVAL_SIZE;
> +		}
> +	}
> +
> +	return ICE_SUCCESS;
> +}
> +
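To illustrate how a ce_info table drives ice_set_ctx() and the per-width writers above, here is a hypothetical one-field descriptor (the struct and values are invented for illustration; only the ice_ctx_ele members referenced in this file are used, and offsetof comes from <stddef.h>):

	struct my_ctx {
		u16 qlen;
	};

	static const struct ice_ctx_ele my_ctx_info[] = {
		/* pack 13 bits of qlen into the packed buffer starting at bit 3;
		 * size_of == sizeof(u16) selects ice_write_word()
		 */
		{ .offset = offsetof(struct my_ctx, qlen), .size_of = sizeof(u16),
		  .width = 13, .lsb = 3 },
		{ 0 },	/* a width of 0 terminates the loop in ice_set_ctx() */
	};
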
> +
> +
> +
> +
> +/**
> + * ice_ena_vsi_txq
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc: tc number
> + * @num_qgrps: Number of added queue groups
> + * @buf: list of queue groups to be added
> + * @buf_size: size of buffer for indirect command
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function adds one LAN queue
> + */
> +enum ice_status
> +ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
> +		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
> +		struct ice_sq_cd *cd)
> +{
> +	struct ice_aqc_txsched_elem_data node = { 0 };
> +	struct ice_sched_node *parent;
> +	enum ice_status status;
> +	struct ice_hw *hw;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	if (num_qgrps > 1 || buf->num_txqs > 1)
> +		return ICE_ERR_MAX_LIMIT;
> +
> +	hw = pi->hw;
> +
> +	if (!ice_is_vsi_valid(hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	/* find a parent node */
> +	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
> +					    ICE_SCHED_NODE_OWNER_LAN);
> +	if (!parent) {
> +		status = ICE_ERR_PARAM;
> +		goto ena_txq_exit;
> +	}
> +
> +	buf->parent_teid = parent->info.node_teid;
> +	node.parent_teid = parent->info.node_teid;
> +	/* Mark the values in the "generic" section as valid. The default
> +	 * value in the "generic" section is zero. This means that :
> +	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
> +	 * - 0 priority among siblings, indicated by Bit 1-3.
> +	 * - WFQ, indicated by Bit 4.
> +	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
> +	 * Bit 5-6.
> +	 * - Bit 7 is reserved.
> +	 * Without setting the generic section as valid in valid_sections, the
> +	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
> +	 */
> +	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
> +
> +	/* add the LAN queue */
> +	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
> +	if (status != ICE_SUCCESS) {
> +		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
> +			  LE16_TO_CPU(buf->txqs[0].txq_id),
> +			  hw->adminq.sq_last_status);
> +		goto ena_txq_exit;
> +	}
> +
> +	node.node_teid = buf->txqs[0].q_teid;
> +	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
> +
> +	/* add a leaf node into the scheduler tree queue layer */
> +	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
> +
> +ena_txq_exit:
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
> +
> +/**
> + * ice_dis_vsi_txq
> + * @pi: port information structure
> + * @num_queues: number of queues
> + * @q_ids: pointer to the q_id array
> + * @q_teids: pointer to queue node teids
> + * @rst_src: if called due to reset, specifies the rst source
> + * @vmvf_num: the relative vm or vf number that is undergoing the reset
> + * @cd: pointer to command details structure or NULL
> + *
> + * This function removes queues and their corresponding nodes in SW DB
> + */
> +enum ice_status
> +ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
> +		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		struct ice_sq_cd *cd)
> +{
> +	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
> +	struct ice_aqc_dis_txq_item qg_list;
> +	u16 i;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	/* If the queue is already disabled but the disable queue command
> +	 * still has to be sent to complete the VF reset, then call
> +	 * ice_aq_dis_lan_txq without any queue information
> +	 */
> +
> +	if (!num_queues && rst_src)
> +		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
> +					  NULL);
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	for (i = 0; i < num_queues; i++) {
> +		struct ice_sched_node *node;
> +
> +		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
> +		if (!node)
> +			continue;
> +		qg_list.parent_teid = node->info.parent_teid;
> +		qg_list.num_qs = 1;
> +		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
> +		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
> +					    sizeof(qg_list), rst_src, vmvf_num,
> +					    cd);
> +
> +		if (status != ICE_SUCCESS)
> +			break;
> +		ice_free_sched_node(pi, node);
> +	}
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
> +
> +/**
> + * ice_cfg_vsi_qs - configure the new/existing VSI queues
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc_bitmap: TC bitmap
> + * @maxqs: max queues array per TC
> + * @owner: lan or rdma
> + *
> + * This function adds/updates the VSI queues per TC.
> + */
> +static enum ice_status
> +ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +	       u16 *maxqs, u8 owner)
> +{
> +	enum ice_status status = ICE_SUCCESS;
> +	u8 i;
> +
> +	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
> +		return ICE_ERR_CFG;
> +
> +	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	ice_acquire_lock(&pi->sched_lock);
> +
> +	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
> +		/* configuration is possible only if TC node is present */
> +		if (!ice_sched_get_tc_node(pi, i))
> +			continue;
> +
> +		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
> +					   ice_is_tc_ena(tc_bitmap, i));
> +		if (status)
> +			break;
> +	}
> +
> +	ice_release_lock(&pi->sched_lock);
> +	return status;
> +}
> +
> +/**
> + * ice_cfg_vsi_lan - configure VSI lan queues
> + * @pi: port information structure
> + * @vsi_handle: software VSI handle
> + * @tc_bitmap: TC bitmap
> + * @max_lanqs: max lan queues array per TC
> + *
> + * This function adds/updates the VSI lan queues per TC.
> + */
> +enum ice_status
> +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +		u16 *max_lanqs)
> +{
> +	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
> +			      ICE_SCHED_NODE_OWNER_LAN);
> +}
> +
> +
> +
> +/**
> + * ice_replay_pre_init - replay pre initialization
> + * @hw: pointer to the hw struct
> + *
> + * Initializes required config data for VSI, FD, ACL, and RSS before replay.
> + */
> +static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
> +{
> +	struct ice_switch_info *sw = hw->switch_info;
> +	u8 i;
> +
> +	/* Delete old entries from replay filter list head if there is any */
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	/* At the start of replay, move entries into the replay_rules list;
> +	 * this allows adding rule entries back to the filt_rules list,
> +	 * which is the operational list.
> +	 */
> +	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
> +		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
> +				  &sw->recp_list[i].filt_replay_rules);
> +	ice_sched_replay_agg_vsi_preinit(hw);
> +
> +	return ice_sched_replay_tc_node_bw(hw);
> +}
> +
> +/**
> + * ice_replay_vsi - replay vsi configuration
> + * @hw: pointer to the hw struct
> + * @vsi_handle: driver vsi handle
> + *
> + * Restore all VSI configuration after reset. It is required to call this
> + * function with main VSI first.
> + */
> +enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
> +{
> +	enum ice_status status;
> +
> +	if (!ice_is_vsi_valid(hw, vsi_handle))
> +		return ICE_ERR_PARAM;
> +
> +	/* Replay pre-initialization if there is any */
> +	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
> +		status = ice_replay_pre_init(hw);
> +		if (status)
> +			return status;
> +	}
> +
> +	/* Replay per VSI all filters */
> +	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
> +	if (!status)
> +		status = ice_replay_vsi_agg(hw, vsi_handle);
> +	return status;
> +}
> +
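A reset-recovery sketch honoring the ordering requirement stated above (the vsi_handles list and num_vsis are hypothetical):

	enum ice_status status;
	u16 i;

	/* the main VSI must be replayed before any other VSI */
	status = ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
	for (i = 0; status == ICE_SUCCESS && i < num_vsis; i++)
		status = ice_replay_vsi(hw, vsi_handles[i]);
	ice_replay_post(hw);	/* drop the saved replay rules */
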
> +/**
> + * ice_replay_post - post replay configuration cleanup
> + * @hw: pointer to the hw struct
> + *
> + * Post replay cleanup.
> + */
> +void ice_replay_post(struct ice_hw *hw)
> +{
> +	/* Delete old entries from replay filter list head */
> +	ice_rm_all_sw_replay_rule_info(hw);
> +	ice_sched_replay_agg(hw);
> +}
> +
> +/**
> + * ice_stat_update40 - read 40 bit stat from the chip and update stat values
> + * @hw: ptr to the hardware info
> + * @hireg: high 32 bit HW register to read from
> + * @loreg: low 32 bit HW register to read from
> + * @prev_stat_loaded: bool to specify if previous stats are loaded
> + * @prev_stat: ptr to previous loaded stat value
> + * @cur_stat: ptr to current stat value
> + */
> +void
> +ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
> +		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
> +{
> +	u64 new_data;
> +
> +	new_data = rd32(hw, loreg);
> +	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
> +
> +	/* device stats are not reset at PFR, they likely will not be zeroed
> +	 * when the driver starts. So save the first values read and use them as
> +	 * offsets to be subtracted from the raw values in order to report stats
> +	 * that count from zero.
> +	 */
> +	if (!prev_stat_loaded)
> +		*prev_stat = new_data;
> +	if (new_data >= *prev_stat)
> +		*cur_stat = new_data - *prev_stat;
> +	else
> +		/* to manage the potential roll-over */
> +		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
> +	*cur_stat &= 0xFFFFFFFFFFULL;
> +}
> +
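A worked example of the roll-over branch above: if *prev_stat is 0xFFFFFFFFF0 and the 40-bit counter wraps so that new_data reads 0x10, then new_data < *prev_stat, and the delta is computed as (0x10 + 2^40) - 0xFFFFFFFFF0 = 0x20, i.e. 32, which is the true increment across the wrap.
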
> +/**
> + * ice_stat_update32 - read 32 bit stat from the chip and update stat values
> + * @hw: ptr to the hardware info
> + * @reg: HW register to read from
> + * @prev_stat_loaded: bool to specify if previous stats are loaded
> + * @prev_stat: ptr to previous loaded stat value
> + * @cur_stat: ptr to current stat value
> + */
> +void
> +ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
> +		  u64 *prev_stat, u64 *cur_stat)
> +{
> +	u32 new_data;
> +
> +	new_data = rd32(hw, reg);
> +
> +	/* device stats are not reset at PFR, they likely will not be zeroed
> +	 * when the driver starts. So save the first values read and use them as
> +	 * offsets to be subtracted from the raw values in order to report stats
> +	 * that count from zero.
> +	 */
> +	if (!prev_stat_loaded)
> +		*prev_stat = new_data;
> +	if (new_data >= *prev_stat)
> +		*cur_stat = new_data - *prev_stat;
> +	else
> +		/* to manage the potential roll-over */
> +		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
> +}
> +
> +
> +/**
> + * ice_sched_query_elem - query element information from hw
> + * @hw: pointer to the hw struct
> + * @node_teid: node teid to be queried
> + * @buf: buffer to element information
> + *
> + * This function queries HW element information
> + */
> +enum ice_status
> +ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
> +		     struct ice_aqc_get_elem *buf)
> +{
> +	u16 buf_size, num_elem_ret = 0;
> +	enum ice_status status;
> +
> +	buf_size = sizeof(*buf);
> +	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
> +	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
> +	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
> +					  NULL);
> +	if (status != ICE_SUCCESS || num_elem_ret != 1)
> +		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
> +	return status;
> +}
> diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
> new file mode 100644
> index 0000000..082ae66
> --- /dev/null
> +++ b/drivers/net/ice/base/ice_common.h
> @@ -0,0 +1,186 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2001-2018
> + */
> +
> +#ifndef _ICE_COMMON_H_
> +#define _ICE_COMMON_H_
> +
> +#include "ice_type.h"
> +
> +#include "ice_switch.h"
> +
> +/* prototype for functions used for SW locks */
> +void ice_free_list(struct LIST_HEAD_TYPE *list);
> +void ice_init_lock(struct ice_lock *lock);
> +void ice_acquire_lock(struct ice_lock *lock);
> +void ice_release_lock(struct ice_lock *lock);
> +void ice_destroy_lock(struct ice_lock *lock);
> +
> +void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
> +void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
> +
> +bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
> +
> +enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
> +
> +void
> +ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
> +enum ice_status ice_init_hw(struct ice_hw *hw);
> +void ice_deinit_hw(struct ice_hw *hw);
> +enum ice_status ice_check_reset(struct ice_hw *hw);
> +enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
> +
> +enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
> +void ice_shutdown_all_ctrlq(struct ice_hw *hw);
> +enum ice_status
> +ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> +		  struct ice_rq_event_info *e, u16 *pending);
> +enum ice_status
> +ice_get_link_status(struct ice_port_info *pi, bool *link_up);
> +enum ice_status
> +ice_update_link_info(struct ice_port_info *pi);
> +enum ice_status
> +ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> +		enum ice_aq_res_access_type access, u32 timeout);
> +void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
> +enum ice_status
> +ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
> +		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
> +		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
> +enum ice_status ice_init_nvm(struct ice_hw *hw);
> +enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
> +enum ice_status
> +ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
> +enum ice_status
> +ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> +		struct ice_aq_desc *desc, void *buf, u16 buf_size,
> +		struct ice_sq_cd *cd);
> +void ice_clear_pxe_mode(struct ice_hw *hw);
> +
> +enum ice_status ice_get_caps(struct ice_hw *hw);
> +
> +
> +
> +#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
> +void ice_dev_onetime_setup(struct ice_hw *hw);
> +#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
> +
> +
> +enum ice_status
> +ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
> +		  u32 rxq_index);
> +#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
> +enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
> +enum ice_status
> +ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
> +enum ice_status
> +ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
> +			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
> +			 u32 tx_cmpltnq_index);
> +enum ice_status
> +ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
> +enum ice_status
> +ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
> +			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
> +			  u32 tx_drbell_q_index);
> +#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
> +
> +enum ice_status
> +ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
> +		   u16 lut_size);
> +enum ice_status
> +ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
> +		   u16 lut_size);
> +enum ice_status
> +ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys);
> +enum ice_status
> +ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
> +		   struct ice_aqc_get_set_rss_keys *keys);
> +
> +bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
> +enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
> +void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
> +extern const struct ice_ctx_ele ice_tlan_ctx_info[];
> +enum ice_status
> +ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
> +enum ice_status
> +ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
> +		void *buf, u16 buf_size, struct ice_sq_cd *cd);
> +enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
> +
> +enum ice_status
> +ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
> +		    struct ice_aqc_get_phy_caps_data *caps,
> +		    struct ice_sq_cd *cd);
> +void
> +ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
> +		    u16 link_speeds_bitmap);
> +enum ice_status
> +ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
> +			struct ice_sq_cd *cd);
> +
> +enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
> +enum ice_status
> +ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
> +		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
> +enum ice_status
> +ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
> +	   bool ena_auto_link_update);
> +void
> +ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
> +void
> +ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
> +			 struct ice_aqc_set_phy_cfg_data *cfg);
> +enum ice_status
> +ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
> +			   struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
> +		     struct ice_link_status *link, struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
> +		      struct ice_sq_cd *cd);
> +enum ice_status
> +ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
> +
> +
> +enum ice_status
> +ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
> +		       struct ice_sq_cd *cd);
> +
> +
> +
> +
> +enum ice_status
> +ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
> +		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
> +		struct ice_sq_cd *cmd_details);
> +enum ice_status
> +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
> +		u16 *max_lanqs);
> +enum ice_status
> +ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
> +		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
> +		struct ice_sq_cd *cd);
> +enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
> +void ice_replay_post(struct ice_hw *hw);
> +void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
> +void ice_sched_replay_agg(struct ice_hw *hw);
> +enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
> +enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
> +enum ice_status
> +ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
> +			 enum ice_rl_type rl_type, u8 bw_alloc);
> +enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
> +void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
> +void
> +ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
> +		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
> +void
> +ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
> +		  u64 *prev_stat, u64 *cur_stat);
> +enum ice_status
> +ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
> +		     struct ice_aqc_get_elem *buf);
> +#endif /* _ICE_COMMON_H_ */
> 

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
  2018-12-12 21:18       ` Stillwell Jr, Paul M
@ 2018-12-13  1:26         ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  1:26 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Mattias Rönnblom, dev

Hi Mattias,


> -----Original Message-----
> From: Stillwell Jr, Paul M
> Sent: Thursday, December 13, 2018 5:18 AM
> To: Mattias Rönnblom <mattias.ronnblom@ericsson.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
> 
> -----Original Message-----
> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Wednesday, December 12, 2018 11:59 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions
> 
> On 2018-12-12 07:59, Wenzhuo Lu wrote:
> > From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> >
> > Add code that multiple other features use.
> >
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > ---
> >   drivers/net/ice/base/ice_common.c | 3521
> +++++++++++++++++++++++++++++++++++++
> >   drivers/net/ice/base/ice_common.h |  186 ++
> >   2 files changed, 3707 insertions(+)
> >   create mode 100644 drivers/net/ice/base/ice_common.c
> >   create mode 100644 drivers/net/ice/base/ice_common.h
Thanks for the review. But this is base code used internally by several teams, so we will correct it only if there's a bug.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops
  2018-12-12 20:07     ` Mattias Rönnblom
@ 2018-12-13  1:34       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  1:34 UTC (permalink / raw)
  To: Mattias Rönnblom, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Mattias,


> -----Original Message-----
> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Thursday, December 13, 2018 4:07 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue
> ops
> 
> On 2018-12-12 07:59, Wenzhuo Lu wrote:
> 
> /../
> 
> > +
> > +	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
> > +		((volatile char *)rxq->rx_ring)[i] = 0;
> 
> More of a general question... but why doesn't DPDK has the
> READ/WRITE_ONCE() of the Linux kernel? Would reduce the amount of
> open-coded use of volatile.
Thanks for the comments. They help me understand more about the checkpatch complaint about the usage of volatile.
Sorry, I don't know the answer. I guess we may need a volunteer, or there may be a license concern? Better to wait for others' comments.
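For readers unfamiliar with the kernel macros being discussed, they boil down to forcing a single volatile access per read or write; a minimal sketch of the idea in C (not an existing DPDK API, and simplified relative to the kernel's versions):

	#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
	#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

With such helpers, the open-coded loop quoted above could become WRITE_ONCE(((char *)rxq->rx_ring)[i], 0) for each byte, keeping the volatile qualifier out of the call sites.
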


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-12 18:17     ` Ferruh Yigit
@ 2018-12-13  2:39       ` Lu, Wenzhuo
  2018-12-13 15:13         ` Ferruh Yigit
  2018-12-13  2:57       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  2:39 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Thursday, December 13, 2018 2:18 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device
> initialization
> 
> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > @@ -297,6 +297,15 @@
> CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
> >  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >
> >  #
> > +# Compile burst-oriented ICE PMD driver #
> CONFIG_RTE_LIBRTE_ICE_PMD=y
> > +CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
> CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
> > +CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
> > +CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
> 
> Is there a way to convert this into a runtime config? Does it need to be a
> compile-time config?
If we only consider the functionality, we can remove this macro entirely. That's why it's set to 'y' by default.
We introduced this macro so users can improve performance in some specific cases. For example, if the MTU is small enough, the code wrapped by this macro can be removed entirely, so the performance is better. Given that the purpose is to achieve the best performance, it's hard to make it a runtime configuration.
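For context, the usual mechanism for runtime options in DPDK PMDs is a devargs key parsed with the kvargs API; a hedged sketch of what such a knob could look like (the option name, devargs variable, and bulk_alloc_en flag are hypothetical):

	#include <stdlib.h>
	#include <rte_kvargs.h>

	static int
	parse_bool(const char *key __rte_unused, const char *value, void *opaque)
	{
		*(int *)opaque = atoi(value);
		return 0;
	}

	static const char * const valid_keys[] = { "rx_bulk_alloc", NULL };
	struct rte_kvargs *kvlist = rte_kvargs_parse(devargs->args, valid_keys);

	if (kvlist != NULL) {
		rte_kvargs_process(kvlist, "rx_bulk_alloc", parse_bool, &bulk_alloc_en);
		rte_kvargs_free(kvlist);
	}

Whether the bulk-allocation fast path keeps its performance when selected at runtime rather than compiled out is the separate question raised above.
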

> 
> > +CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
> 
> Some of these config options are documented in ice.rst, but the document is
> introduced in the last patch. What do you think about adding the
> documentation as each feature is added?
Good suggestion. I'll change it in v4.

> 
> <...>
> 
> > +#
> > +# Add extra flags for base driver files (also known as shared code) #
> > +to disable warnings # ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> > +CFLAGS_BASE_DRIVER = -wd593 -wd188 else ifeq
> > +($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
> > +CFLAGS_BASE_DRIVER += -Wno-sign-compare CFLAGS_BASE_DRIVER +=
> > +-Wno-unused-value CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> > +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing CFLAGS_BASE_DRIVER +=
> > +-Wno-format CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> > +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast CFLAGS_BASE_DRIVER
> +=
> > +-Wno-format-nonliteral CFLAGS_BASE_DRIVER += -Wno-unused-variable
> > +else CFLAGS_BASE_DRIVER  = -Wno-sign-compare CFLAGS_BASE_DRIVER
> +=
> > +-Wno-unused-value CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> > +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing CFLAGS_BASE_DRIVER +=
> > +-Wno-format CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> > +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast CFLAGS_BASE_DRIVER
> +=
> > +-Wno-format-nonliteral CFLAGS_BASE_DRIVER += -Wno-format-security
> > +CFLAGS_BASE_DRIVER += -Wno-unused-variable
> > +
> > +ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
> > +CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable endif
> 
> Are all these special warning-disable cases needed for ice? It looks like a
> copy-paste from other drivers. I suggest starting from an empty exception
> list; we can add entries if we need them, but let's not start with an
> existing list.
Totally agree, will handle it in v4.

> 
> <...>
> 
> > +# this lib depends upon:
> > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal
> > +lib/librte_ether
> > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool
> > +lib/librte_mbuf
> > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
> > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
> 
> As far as I remember we removed DEPDIRS from makefiles, there is no more
> dynamic dependency resolving, so it should be safe to remove above lines.
I don't understand. This is to handle the compile error in v1, like, https://patches.dpdk.org/patch/48286/
We have to have it.

> 
> > +
> > +include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/drivers/net/ice/ice_ethdev.c
> > b/drivers/net/ice/ice_ethdev.c new file mode 100644 index
> > 0000000..e0bf15c
> > --- /dev/null
> > +++ b/drivers/net/ice/ice_ethdev.c
> > @@ -0,0 +1,640 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation  */
> > +
> > +#include <rte_ethdev_pci.h>
> > +
> > +#include "base/ice_sched.h"
> > +#include "ice_ethdev.h"
> > +#include "ice_rxtx.h"
> > +
> > +#define ICE_MAX_QP_NUM "max_queue_pair_num"
> 
> When documentation is added into this patch, can you also add this runtime
> config to that please?
The macro? It's not a configuration. It’s a string used internally.

> 
> <...>
> 
> > +static int
> > +ice_dev_init(struct rte_eth_dev *dev) {
> > +	struct rte_pci_device *pci_dev;
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	int ret;
> > +
> > +	dev->dev_ops = &ice_eth_dev_ops;
> > +
> > +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> > +
> > +	rte_eth_copy_pci_info(dev, pci_dev);
> 
> This is done by rte_eth_dev_pci_generic_probe(), do we need here?
No, we only need the info; we don't want to probe here.

> 
> <...>
> 
> > +RTE_INIT(ice_init_log);
> > +static void
> > +ice_init_log(void)
> 
> Can merge these lines, please check other samples.
Will change it in v4.

> 
> > +{
> > +	ice_logtype_init = rte_log_register("pmd.ice.init");
> 
> pmd.net.ice.init
Will change it in v4.

> 
> > +	if (ice_logtype_init >= 0)
> > +		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
> > +	ice_logtype_driver = rte_log_register("pmd.ice.driver");
> 
> pmd.net.ice.driver
Will change it in v4.

> 
> <...>
> 
> > +static void
> > +ice_dev_close(struct rte_eth_dev *dev) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +
> > +	ice_res_pool_destroy(&pf->msix_pool);
> > +	ice_release_vsi(pf->main_vsi);
> > +
> > +	ice_shutdown_all_ctrlq(hw);
> > +}
> 
> I am mostly for ordering functions in a way that doesn't require
> forward declarations, which mostly helps when reading the code since the
> function order is close to the call order.
I just want to align the order with ice_eth_dev_ops. But anyway I can move it forward.

> 
> It is up to you, but also for the sake of consistency I think it is better
> to move this function up, and leave the probe/remove/init_log functions as
> the last functions in the file.
So it was an unsuccessful attempt to show the strong connection of probe/remove with dev_init/uninit :)
I'll move them to the bottom of this file.

> 
> <...>
> 
> > +#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
> > +		       ICE_FLAG_DCB | \
> > +		       ICE_FLAG_VMDQ | \
> > +		       ICE_FLAG_SRIOV | \
> > +		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
> > +		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
> > +		       ICE_FLAG_FDIR | \
> > +		       ICE_FLAG_VXLAN | \
> > +		       ICE_FLAG_RSS_AQ_CAPABLE | \
> > +		       ICE_FLAG_VF_MAC_BY_PF)
> > +
> > +#define ICE_RSS_OFFLOAD_ALL ( \
> > +	ETH_RSS_FRAG_IPV4 | \
> > +	ETH_RSS_NONFRAG_IPV4_TCP | \
> > +	ETH_RSS_NONFRAG_IPV4_UDP | \
> > +	ETH_RSS_NONFRAG_IPV4_SCTP | \
> > +	ETH_RSS_NONFRAG_IPV4_OTHER | \
> > +	ETH_RSS_FRAG_IPV6 | \
> > +	ETH_RSS_NONFRAG_IPV6_TCP | \
> > +	ETH_RSS_NONFRAG_IPV6_UDP | \
> > +	ETH_RSS_NONFRAG_IPV6_SCTP | \
> > +	ETH_RSS_NONFRAG_IPV6_OTHER | \
> > +	ETH_RSS_L2_PAYLOAD)
> 
> ICE_RSS_OFFLOAD_ALL is not used at all until later in this patchset. I think
> it is more logical to add code in the patch where it is used; otherwise it
> is hard to have complete logic in a single patch and harder to observe any
> possible issue. What do you think about re-arranging them?
Sure. I'll move it to another patch.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-12 18:17     ` Ferruh Yigit
  2018-12-13  2:39       ` Lu, Wenzhuo
@ 2018-12-13  2:57       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  2:57 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,
> >
> > > +# this lib depends upon:
> > > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_eal
> > > +lib/librte_ether
> > > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_mempool
> > > +lib/librte_mbuf
> > > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_net
> > > +DEPDIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += lib/librte_kvargs
> >
> > As far as I remember we removed DEPDIRS from makefiles, there is no
> > more dynamic dependency resolving, so it should be safe to remove above
> lines.
> I don't understand. This is to handle the compile error in v1, like,
> https://patches.dpdk.org/patch/48286/
> We have to have it.
Sorry my bad. I talked about another change. I'll try to remove it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
       [not found]                       ` <6A0DE07E22DDAD4C9103DF62FEBC09093FE1188F@shsmsx102.ccr.corp.intel.com>
@ 2018-12-13  5:16                         ` Varghese, Vipin
  0 siblings, 0 replies; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-13  5:16 UTC (permalink / raw)
  To: Lu, Wenzhuo, Zhang, Qi Z, dev; +Cc: Yigit, Ferruh, Zhang, Helin, Xu, Qian Q

Adding to mailing list for the last update

> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Thursday, December 6, 2018 1:19 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Varghese, Vipin
> <vipin.varghese@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Helin
> <helin.zhang@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization
> 
> Yes, after discussing with colleagues, I think this feature is not well designed and
> implemented. Will remove it from this release.
> 
> 
> Best regards
> Wenzhuo Lu
> 
> 
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, December 6, 2018 3:46 PM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Varghese, Vipin
> > <vipin.varghese@intel.com>
> > Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Helin
> > <helin.zhang@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> > initialization
> >
> > Yes, I saw that, so there is no gap for this,  we can simply set this
> > feature to N in our current release.
> >
> > Thanks
> > Qi
> >
> > > -----Original Message-----
> > > From: Lu, Wenzhuo
> > > Sent: Thursday, December 6, 2018 3:43 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Varghese, Vipin
> > > <vipin.varghese@intel.com>
> > > Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Helin
> > > <helin.zhang@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> > > initialization
> > >
> > > Hi Qi,
> > > I think we have "secondary support", it's named as " Multiprocess
> > > aware". We can omit that.
> > >
> > > Best regards
> > > Wenzhuo Lu
> > >
> > >
> > > > -----Original Message-----
> > > > From: Zhang, Qi Z
> > > > Sent: Thursday, December 6, 2018 3:31 PM
> > > > To: Varghese, Vipin <vipin.varghese@intel.com>
> > > > Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>; Zhang, Helin <helin.zhang@intel.com>; Xu,
> > > > Qian Q <qian.q.xu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> > > > initialization
> > > >
> > > > Hi Vipin:
> > > >
> > > > 	I saw you observed the missing feature description in the CPK
> > > > document. From my view it's not a CPK-specific issue, but a
> > > > generic issue of missing items in the existing NIC feature list.
> > > > 	I'm wondering if you could summarize all the gaps from the DTS view
> > > > and raise a Jira case to the DPDK team, so we can add this to the
> > > > future development plan.
> > > >
> > > > 	So far what I captured is
> > > > 	1. tx loopback
> > > > 	2. secondary support
> > > >
> > > > 	What do you think about it?
> > > >
> > > > Thanks
> > > > Qi
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Lu, Wenzhuo
> > > > > Sent: Thursday, December 6, 2018 3:04 PM
> > > > > To: Varghese, Vipin <vipin.varghese@intel.com>; dev@dpdk.org
> > > > > Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> > > > > <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH v2 02/20] net/ice: support device
> > > > > initialization
> > > > >
> > > > > Hi Vipin,
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Varghese, Vipin
> > > > > > Sent: Thursday, December 6, 2018 2:31 PM
> > > > > > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > > > > > Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> > > > > > <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2 02/20] net/ice: support
> > > > > > device initialization
> > > > > >
> > > > > > snipped
> > > > > > > > > > > +	ice_init_controlq_parameter(hw);
> > > > > > > > > > > +
> > > > > > > > > > > +	ret = ice_init_hw(hw);
> > > > > > > > > > > +	if (ret) {
> > > > > > > > > > > +		PMD_INIT_LOG(ERR, "Failed to initialize
> > HW");
> > > > > > > > > > > +		return -EINVAL;
> > > > > > > > > > > +	}
> > > > > > > > > >
> > > > > > > > > > Definition for ice_init_hw in patch 01/20 does not
> > > > > > > > > > check for primary/secondary. Are we allowing secondary to
> > > > > > > > > > invoke ice_init_hw if it is initialized by primary?
> > > > > > > > > It's a patch split issue. We add the check in a later patch.
> > > > > > > > > I will put it in this patch in the new version.
> > > > > > > > Suggestion: if a comment is kept in the current patch, it will be
> > > > > > > > easier to understand that it is taken care of in a future patch.
> > > > > > > >
> > > > > > > > For example, patch 2/20 has a comment stating that support is
> > > > > > > > added in patch 5/20. Then, when patch 5/20 removes the ToDo, it
> > > > > > > > is easier to read and understand the flow.
> > > > > > > I mean I made a mistake by putting the check code in a later patch.
> > > > > > > Actually this code should be put in this patch. I plan to correct it.
> > > > > > > But currently I think we're running out of time. I prefer
> > > > > > > not supporting multi-process in this release.
> > > > > > Thanks for clarifying. It will be helpful to add 'to do
> > > > > > or future items' in the cover letter, code comments and release
> > > > > > documents, which helps reviewers, early adopters and later
> > > > > > maintainers.
> > > > > I'd like to suggest focusing on what we have. Sorry, for many
> > > > > reasons it's not appropriate to talk too much about what we'll do in
> > > > > the future. For instance, internally we have a plan, but it keeps
> > > > > changing; some things are still under investigation...
> > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > +
> > > > > > > > > > > +	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
> > > > > > > > > > > +		     hw->fw_maj_ver, hw->fw_min_ver,
> > > hw->fw_build,
> > > > > > > > > > > +		     hw->api_maj_ver, hw->api_min_ver);
> > > > > > > > > > > +
> > > > > > > > > >
> > > > > > > > > > Snipped
> > > > > > > > > >
> > > > > > > > > > > +
> > > > > > > > > > > +static int
> > > > > > > > > > > +ice_dev_uninit(struct rte_eth_dev *dev) {
> > > > > > > > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev-
> > >data-
> > > > > > > > > > > >dev_private);
> > > > > > > > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev-
> > >data-
> > > > > > > > > > >dev_private);
> > > > > > > > > > > +
> > > > > > > > > > > +	ICE_PROC_SECONDARY_CHECK_RET_0;
> > > > > > > > > >
> > > > > > > > > > Should not we check if primary is alive and NIC is
> > > > > > > > > > used or initialized by primary then '
> > > > ICE_PROC_SECONDARY_CHECK_RET_0'?
> > > > > > > > > I think it's not a critical issue if the process is
> > > > > > > > > terminated abnormally without uninit.
> > > > > > > > > Compared with that, I have more concern about this
> > > > > > > > > scenario: if the primary process exits and uninits the
> > > > > > > > > resources, the secondary process is left alone.
> > > > > > > > Since the primary is the application which reserves the huge-page
> > > > > > > > memory (malloc, zmalloc, memzone), when the secondary is
> > > > > > > > killed or stopped the whole huge pages are released. I am a bit
> > > > > > > > confused about what the suggested check would affect.
> > > > > > > >
> > > > > > > > > And also, to me it looks like not a good solution to change
> > > > > > > > > every PMD for this feature.
> > > > > > > > I am not aware why other PMDs are done in a specific way.
> > > > > > > > In my humble opinion, if there is a right way, let it be
> > > > > > > > used rather than doing it another way.
> > > > > > > >
> > > > > > > > > I don't see many PMDs supporting it. Maybe we'd better not
> > > > > > > > > support it now and wait for a better whole picture.
> > > > > > > > I will wait for others to comment on this approach.
> > > > > > > >
> > > > > > > > snipped
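
For reference, a minimal sketch of the process-type guard being discussed. The expansion below is an assumption for illustration — the actual ICE_PROC_SECONDARY_CHECK body is not shown in this thread — but it uses only the standard EAL API:

/* Hypothetical expansion of ICE_PROC_SECONDARY_CHECK: refuse to run
 * device ops that are only safe in the primary process. */
#include <rte_eal.h>
#include <rte_errno.h>

#define ICE_PROC_SECONDARY_CHECK \
	do { \
		if (rte_eal_process_type() == RTE_PROC_SECONDARY) \
			return -E_RTE_SECONDARY; \
	} while (0)

With a guard like this, a secondary process attached to the same device gets a clean error from ops such as ice_dev_start() instead of touching primary-owned state.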

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
  2018-12-12 15:19     ` Ferruh Yigit
@ 2018-12-13  5:17       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  5:17 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Stillwell Jr, Paul M

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, December 12, 2018 11:19 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures
> 
> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> > From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> >
> > Add the structures required by the NIC.
> >
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> For consistency, base driver updates subsystem can be used as
> "net/ice/base: "
> and it should start with lowercase, so something like:
> 
> "net/ice/base: add basic structures"
> 
> Same for rest of the base code updates.
I will change them in v4.

> 
> <...>


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
                     ` (33 preceding siblings ...)
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build Wenzhuo Lu
@ 2018-12-13  6:02   ` Varghese, Vipin
  2018-12-13  7:10     ` Lu, Wenzhuo
  34 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-13  6:02 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo

Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> Sent: Wednesday, December 12, 2018 12:30 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> This patch set adds the support of a new net PMD, Intel® Ethernet Network
> Adapters E810, also called ice.
> 
> Besides enabling this new NIC, also some other features supported on this NIC.

Can you mention the other features?

> Like below,
> 
> Basic features:
> 1, Basic device operations: probe, initialization, start/stop, configure, info get.
> 2, RX/TX queue operations: setup/release, start/stop, info get.
> 3, RX/TX.
> 
> HW Offload features:
> 1, CRC Stripping/insertion.
> 2, L2/L3 checksum strip/insertion.
> 3, PVID set.
> 4, TPID change.
> 5, TSO (LRO/RSC not supported).
> 
> Stats:
> 1, statics & xstatics.
> 
> Switch functions:
> 1, MAC Filter Add/Delete.
> 2, VLAN Filter Add/Delete.
> 
> Power saving:
> 1, RX interrupt mode.
> 
> Misc:
> 1, Interrupt For Link Status.
> 2, firmware info query.
> 3, Jumbo Frame Support.
> 4, ptype check.
> 5, EEPROM check and set.
> 

Can you add a section to highlight the changes with "---"? This is part of 'http://doc.dpdk.org/guides/contributing/patches.html': 'This can be added to the cover letter or the annotations'.

> v2:
>  - Fix shared lib compile issue.
>  - Add meson build support.
>  - Update documents.
>  - Fix more checkpatch issues.
> 
> v3:
>  - Removed the support of secondary process.
>  - Split the base code into more patches.
>  - Pass NULL to rte_zmalloc.
>  - Changed some magic numbers to macros.
>  - Fixed the wrong implementation of a specific bitmap.

Not all comments from v2 are addressed or closed, so I have to assume you will be doing the same for v4.


Some of the items

[PATCH v2 03/20] net/ice: support device and queue ops
> > > > +
> > > > +static int
> > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > >dev_private);
> > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > >dev_private);
> > > > +	uint16_t nb_rxq = 0;
> > > > +	uint16_t nb_txq, i;
> > > > +	int ret;
> > > > +
> > > > +	ICE_PROC_SECONDARY_CHECK;
> > >
> > > Device start is not supported, but how is this differentiated from 
> > > primary configured device vs secondary configured device.
> > >
> > > Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w 
> > > BB:DD:F'. In this case since we are checking process type this 
> > > will return without
> > start?
> Two updates with respect to your comment: 1. tools and applications like
> dpdk-procinfo will no longer be able to pull data, since you are asking to blacklist.
> 2. If there are functions which need to be shared, like the primary using rx-0 and tx-0 and the secondary rx-1 and tx-1, how do we make this work?


[PATCH v2 01/20] net/ice: add base code
> Note: In version 1 I enquired about unit or DTS validation for PMD. Is 
> this still holding good?
Yes, it's planned and ongoing.

[PATCH v2 02/20] net/ice: support device initialization
> +# Compile burst-oriented ICE PMD driver # CONFIG_RTE_LIBRTE_ICE_PMD=y

Based on 'https://patches.dpdk.org/patch/48488/' it is suggested to configure it. But here it is already set to 'y'. Is this correct? If yes, can you update 'https://patches.dpdk.org/patch/48488/'?

 [PATCH v2 20/20] net/ice: support meson build
> > > > Should not meson build option be add start. That is in patch 
> > > > 1/20 so compile options does not fail?
> > > It will not fail. Enabling the compile earlier only means the code 
> > > can be
> > compiled.
> > > But, to use this device we do need the whole patch set. From this 
> > > point of view, compiling it at the end maybe better.
> > Thanks for update, so will 'meson-build' success if apply 3 patches?
> Sure, meson build will not be broken by any one of these patches. Only 
> until this patch, what built by meson can support ice.
Thanks for confirming that you have tried './devtools/test-meson-builds.sh' and that the intermediate build for the ICE_DSI PMD does not fail.

 [PATCH v2 16/20] net/ice: support basic RX/TX
 
 [PATCH v2 14/20] net/ice: support statistics
 > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> >
> > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN. 
> > Should we add VSI VLAN here?
> Don't need. They're different functions. We add crc length here 
> because of HW counts the packet length before crc is added.
So you are not fetching stats from the switch HW registers, is this correct? How will you get stats for actually transmitted packets in xstats? As I understand, xstats is for switch HW stats, right?

 [PATCH v2 04/20] net/ice: support getting device information
 > > Does this mean per queue offload capability is not supported? If 
> > yes, can you mention this in release notes under 'support or limitation'
> No, it's not supported. We have a document, ice.ini, to list all the 
> features supported. All the others are not supported.
> BTW, I don't think anything not supported is limitation.
If I understand correctly, ICE_DSI_PMD is advertising that it has no queue-level offloads for RX and TX, but in ice.ini you are listing offload support. So let me rephrase the question: 'if you support port-level offload capability, it will reflect for all RX and TX queues. But if you report queue-level offload as 0 for RX and TX, then the APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup should fail if a queue offload is enabled. Is this the correct understanding?'

> > If the device switch is not configured (default value from NVM), should
> > we highlight that the switch can support speeds 10, 100, 1000 and so on?
> No, this is the capability we get from HW.
If the HW is supported or configured for 10, 100 or 25G, then those should be returned correctly, this I agree. But when the device is queried for capability, it should highlight all supported speeds of the switch. Am I right?

> > If the speed is not true as stated above, can you please add this to the
> > release notes and documentation.
> Here we listed all the cases we can get from HW.
Please add this to the ice_dsi documentation also.

snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-13  6:02   ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Varghese, Vipin
@ 2018-12-13  7:10     ` Lu, Wenzhuo
  2018-12-13 13:09       ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-13  7:10 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 13, 2018 2:02 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi,
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> > Sent: Wednesday, December 12, 2018 12:30 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> >
> > This patch set adds the support of a new net PMD, Intel® Ethernet
> > Network Adapters E810, also called ice.
> >
> > Besides enabling this new NIC, also some other features supported on this
> NIC.
> 
> Can you mention the other features
Sorry for misleading you. The other features are the ones listed below. I'll reword it.

> 
> > Like below,
> >
> > Basic features:
> > 1, Basic device operations: probe, initialization, start/stop, configure, info
> get.
> > 2, RX/TX queue operations: setup/release, start/stop, info get.
> > 3, RX/TX.
> >
> > HW Offload features:
> > 1, CRC Stripping/insertion.
> > 2, L2/L3 checksum strip/insertion.
> > 3, PVID set.
> > 4, TPID change.
> > 5, TSO (LRO/RSC not supported).
> >
> > Stats:
> > 1, statics & xstatics.
> >
> > Switch functions:
> > 1, MAC Filter Add/Delete.
> > 2, VLAN Filter Add/Delete.
> >
> > Power saving:
> > 1, RX interrupt mode.
> >
> > Misc:
> > 1, Interrupt For Link Status.
> > 2, firmware info query.
> > 3, Jumbo Frame Support.
> > 4, ptype check.
> > 5, EEPROM check and set.
> >
> 
> Can you add section to highlight the changes with "---". This is part of
> 'http://doc.dpdk.org/guides/contributing/patches.html' for 'This can be
> added to the cover letter or the annotations'
Will add it.

> 
> > v2:
> >  - Fix shared lib compile issue.
> >  - Add meson build support.
> >  - Update documents.
> >  - Fix more checkpatch issues.
> >
> > v3:
> >  - Removed the support of secondary process.
> >  - Split the base code into more patches.
> >  - Pass NULL to rte_zmalloc.
> >  - Changed some magic numbers to macros.
> >  - Fixed the wrong implementation of a specific bitmap.
> 
> Not all comments are addressed or closed from V2. So I have to assume you
> will be doing the same for v4.
> 
> 
> Some of the items
> 
> [PATCH v2 03/20] net/ice: support device and queue ops
> > > > > +
> > > > > +static int
> > > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > > >dev_private);
> > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > > >dev_private);
> > > > > +	uint16_t nb_rxq = 0;
> > > > > +	uint16_t nb_txq, i;
> > > > > +	int ret;
> > > > > +
> > > > > +	ICE_PROC_SECONDARY_CHECK;
> > > >
> > > > Device start is not supported, but how is this differentiated from
> > > > primary configured device vs secondary configured device.
> > > >
> > > > Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w
> > > > BB:DD:F'. In this case since we are checking process type this
> > > > will return without
> > > start?
> > Two updates with respect to your comment: 1. tools and applications like
> > dpdk-procinfo will no longer be able to pull data, since you are asking to
> > blacklist.
> > 2. If there are functions which need to be shared, like the primary using rx-0
> > and tx-0 and the secondary rx-1 and tx-1, how do we make this work?
> 
> 
> [PATCH v2 01/20] net/ice: add base code
> > Note: In version 1 I enquired about unit or DTS validation for PMD. Is
> > this still holding good?
> Yes, it's planned and ongoing.
I confirmed validation is ongoing on this device. But it does not impact this patch set unless we find a bug and need to fix it.

> 
> [PATCH v2 02/20] net/ice: support device initialization
> > +# Compile burst-oriented ICE PMD driver #
> CONFIG_RTE_LIBRTE_ICE_PMD=y
> 
> Based on 'https://patches.dpdk.org/patch/48488/' it is suggested to
> configure it. But here it is already set to 'y'. Is this correct? If yes, can you
> update 'https://patches.dpdk.org/patch/48488/'?
We've discussed the setting. I'm not aware of anything left; if there is, please let me know.

> 
>  [PATCH v2 20/20] net/ice: support meson build
> > > > > Should not meson build option be add start. That is in patch
> > > > > 1/20 so compile options does not fail?
> > > > It will not fail. Enabling the compile earlier only means the code
> > > > can be
> > > compiled.
> > > > But, to use this device we do need the whole patch set. From this
> > > > point of view, compiling it at the end maybe better.
> > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > Sure, meson build will not be broken by any one of these patches. Only
> > until this patch, what built by meson can support ice.
> Thanks for confirmation that you have tried './devtools/test-meson-
> builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
I said the meson build is working. But I don't know what the ICE_DSI PMD is.

> 
>  [PATCH v2 16/20] net/ice: support basic RX/TX
> 
>  [PATCH v2 14/20] net/ice: support statistics
>  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > >
> > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > Should we add VSI VLAN here?
> > Don't need. They're different functions. We add crc length here
> > because of HW counts the packet length before crc is added.
> So you are not fetching stats from HW registers from switch is this correct?
> How will you get stats for actually transmitted in xstats? As I understand
> xstats is for switch HW stats right?
No. The stats are read from HW. We just correct them, as the HW doesn't have the chance to add the CRC length. But I don't understand why the switch is mentioned here.
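
For context, this is the correction under discussion, as quoted above from the statistics patch (ns is the stats structure filled from the HW registers):

/* Adjust the HW byte counters by the CRC length, since (per the
 * explanation above) the HW counts the packet length before the
 * 4-byte CRC is appended on TX. */
ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
		     ns->eth.tx_broadcast) * ETHER_CRC_LEN;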

> 
>  [PATCH v2 04/20] net/ice: support getting	device	information
>  > > Does this mean per queue offload capability is not supported? If
> > > yes, can you mention this in release notes under 'support or limitation'
> > No, it's not supported. We have a document, ice.ini, to list all the
> > features supported. All the others are not supported.
> > BTW, I don't think anything not supported is limitation.
> If I understand correctly,  ICE_DSI_PMD is advertising it has not offload for
> RX and TX. But you are stating in ice.ini you are listing offload supports. So
> let me rephrase the question 'if you support port level offload capability, it
> will reflect for all queues rx and tx. But if you reflect queue level offload as 0
> for rx and tx, then APIs rte_eth_rx_queue_setup and
> rte_eth_tx_queue_setup if queue offload enabled should fail. Is this correct
> understanding?'
Sorry, I don't know what ICE_DSI_PMD is.

> 
> > > If device switch is not configured (default value from NVM) should
> > > we highlight the switch can support speed 10, 100, 1000, 1000 and son
> on?
> > No, this's the capability getting from HW.
> If HW is supported or configured for 10, 100, 25G then those should be
> returned correctly this I agree. But when the device is queried for capability
> it should highlight all supported speeds of switch. Am I right?
No. This shows the result, not all the speeds supported. For example, the speed after auto-negotiation.
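
For reference, the distinction being made, sketched with the ethdev definitions of this period (the values are illustrative):

/* Capability: a bitmask of every speed the port could link at,
 * reported via rte_eth_dev_info_get(). */
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G;

/* Status: the single speed actually negotiated right now,
 * reported via the link-update path. */
struct rte_eth_link link;
rte_eth_link_get_nowait(port_id, &link);
/* link.link_speed might be ETH_SPEED_NUM_25G, for example */

The open question in this exchange is whether speed_capa should carry the first meaning (all supported speeds) rather than the second (the negotiated result).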

> 
> > > If speed is not true as stated above, can you please add this to
> > > release notes and documentation.
> > Here listed all the case we can get from HW.
> Please add to ice_dsi documentation also.
Sorry, no idea about ice_dsi.

> 
> snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 20/34] net/ice: support link update Wenzhuo Lu
@ 2018-12-13  8:47     ` Zhang, Qi Z
  2018-12-14  0:36       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-13  8:47 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Wenzhuo:

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Wednesday, December 12, 2018 3:00 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> 
> Add ops link_update.


> +ice_interrupt_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	uint32_t oicr;

I saw the patch also enables an interrupt handler, which looks to be independent and not related to the commit log.
It's better to separate it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-13  9:00     ` Zhang, Qi Z
  2018-12-14  0:37       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-13  9:00 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Wednesday, December 12, 2018 3:00 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
> 
> Add below ops,
> mac_addr_set
> mac_addr_add
> mac_addr_remove
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

.....


> ---
>  drivers/net/ice/ice_ethdev.c | 233
> +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 233 insertions(+)
>
> +static int ice_macaddr_set(struct rte_eth_dev *dev,
> +			   struct ether_addr *mac_addr)
> +{
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_vsi *vsi = pf->main_vsi;
> +	struct ice_mac_filter *f;
> +	uint8_t flags = 0;
> +	int ret;
> +
> +	if (!is_valid_assigned_ether_addr(mac_addr)) {
> +		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
> +		return -EINVAL;
> +	}
> +
> +	TAILQ_FOREACH(f, &vsi->mac_list, next) {
> +		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
> +			break;
> +	}
> +
> +	if (!f) {
> +		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
> +		return -EIO;
> +	}
> +
> +	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
> +	if (ret != ICE_SUCCESS) {
> +		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
> +		return -EIO;
> +	}
> +	ret = ice_add_mac_filter(vsi, mac_addr);
> +	if (ret != ICE_SUCCESS) {
> +		PMD_DRV_LOG(ERR, "Failed to add mac filter");
> +		return -EIO;
> +	}
> +	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
> +
> +	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
> +	ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);

Should we check the return value of the AQ command in case some error happens?

> +
> +	return 0;
> +}
> +

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-13  9:10     ` Zhang, Qi Z
  2018-12-14  0:41       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-13  9:10 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Lu, Wenzhuo, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Wednesday, December 12, 2018 3:00 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device
> information
> 
> Add ops dev_infos_get.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  drivers/net/ice/ice_ethdev.c | 123
> +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 123 insertions(+)
> 

>  }
> +
> +static void
> +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> +*dev_info) {
> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ice_vsi *vsi = pf->main_vsi;
> +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> +
> +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> +	dev_info->max_rx_queues = vsi->nb_qps;
> +	dev_info->max_tx_queues = vsi->nb_qps;
> +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> +	dev_info->max_vfs = pci_dev->max_vfs;
> +
> +	dev_info->rx_offload_capa =
> +		DEV_RX_OFFLOAD_VLAN_STRIP |
> +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_UDP_CKSUM |
> +		DEV_RX_OFFLOAD_TCP_CKSUM |
> +		DEV_RX_OFFLOAD_QINQ_STRIP |
> +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> +		DEV_RX_OFFLOAD_JUMBO_FRAME;

I think we missed some offloads here which the ice driver does support:

Rx port offload
DEV_RX_OFFLOAD_KEEP_CRC
DEV_RX_OFFLOAD_SCATTER
DEV_RX_OFFLOAD_VLAN_FILTER

Tx queue offload
DEV_TX_OFFLOAD_MBUF_FAST_FREE

> +	dev_info->tx_offload_capa =
> +		DEV_TX_OFFLOAD_VLAN_INSERT |
> +		DEV_TX_OFFLOAD_QINQ_INSERT |
> +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> +		DEV_TX_OFFLOAD_UDP_CKSUM |
> +		DEV_TX_OFFLOAD_TCP_CKSUM |
> +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> +		DEV_TX_OFFLOAD_TCP_TSO;
> +	dev_info->rx_queue_offload_capa = 0;
> +	dev_info->tx_queue_offload_capa = 0;

> +

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-13  7:10     ` Lu, Wenzhuo
@ 2018-12-13 13:09       ` Varghese, Vipin
  2018-12-14  1:11         ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-13 13:09 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

Hi Wenzhuo,

Thanks for the update, couple of suggestion in my opinion shared below

Snipped
> > [PATCH v2 02/20] net/ice: support device initialization
> > > +# Compile burst-oriented ICE PMD driver #
> > CONFIG_RTE_LIBRTE_ICE_PMD=y
> >
> > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested to
> > configure. But here is it already set to 'y'. Is this correct? If yes
> > can you update ' https://patches.dpdk.org/patch/48488/'
> We've discussed the setting. I don’t know anything left. If there's, please let me
> know.
I think I poorly communicated, so let me try again. In the document it is stated to configure it as 'y', but in your default config it is already 'y'. So either make the default 'n' so the user can configure it, or reword the document as 'Ensure the config is set to y before building'.

> 
> >
> >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > Should not meson build option be add start. That is in patch
> > > > > > 1/20 so compile options does not fail?
> > > > > It will not fail. Enabling the compile earlier only means the
> > > > > code can be
> > > > compiled.
> > > > > But, to use this device we do need the whole patch set. From
> > > > > this point of view, compiling it at the end maybe better.
> > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > Sure, meson build will not be broken by any one of these patches.
> > > Only until this patch, what built by meson can support ice.
> > Thanks for confirmation that you have tried './devtools/test-meson-
> > builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
> I said meson build is working. But don't know what's ICE_DSI PMD.

Once again, apologies if I poorly communicated about ICE_PMD. My question is: 'if I take patch 1/20 in v2, can I build librte_pmd_ice?' Are we not expecting each additional layer of functionality, like MTU set and switch set, to compile, and not just the last build?

> 
> >
> >  [PATCH v2 16/20] net/ice: support basic RX/TX
> >
> >  [PATCH v2 14/20] net/ice: support statistics
> >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast
> +
> > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > >
> > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > Should we add VSI VLAN here?
> > > Don't need. They're different functions. We add crc length here
> > > because of HW counts the packet length before crc is added.
> > So you are not fetching stats from HW registers from switch is this correct?
> > How will you get stats for actually transmitted in xstats? As I
> > understand xstats is for switch HW stats right?
> No. The stats is got from HW. We just correct it as HW doesn’t have the chance
> to add CRC length. But I don't understand why switch is mentioned here.
Will the ice PMD for CVL expose all the switch stats as 'xstats'? If yes, can you share in the patch comments what the switch-level stats are?

> 
> >
> >  [PATCH v2 04/20] net/ice: support getting	device	information
> >  > > Does this mean per queue offload capability is not supported? If
> > > > yes, can you mention this in release notes under 'support or limitation'
> > > No, it's not supported. We have a document, ice.ini, to list all the
> > > features supported. All the others are not supported.
> > > BTW, I don't think anything not supported is limitation.
> > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > offload for RX and TX. But you are stating in ice.ini you are listing
> > offload supports. So let me rephrase the question 'if you support port
> > level offload capability, it will reflect for all queues rx and tx.
> > But if you reflect queue level offload as 0 for rx and tx, then APIs
> > rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue offload
> > enabled should fail. Is this correct understanding?'
> Sorry, I don’t know what's ICE_DSI_PMD.
ICE_PMD 


> 
> >
> > > > If device switch is not configured (default value from NVM) should
> > > > we highlight the switch can support speed 10, 100, 1000, 1000 and
> > > > son
> > on?
> > > No, this's the capability getting from HW.
> > If HW is supported or configured for 10, 100, 25G then those should be
> > returned correctly this I agree. But when the device is queried for
> > capability it should highlight all supported speeds of switch. Am I right?
> No. Here shows the result not all the speeds supported. Like the speed after
> auto negotiation.
As per your current statement, "if the user uses the API rte_eth_dev_info_get to get speed_capa, the current speed will be returned as the auto-negotiated value and not 'ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G'". I will leave this to others to comment on, since in my humble opinion this is not expected.

> 
> >
> > > > If speed is not true as stated above, can you please add this to
> > > > release notes and documentation.
> > > Here listed all the case we can get from HW.
> > Please add to ice_dsi documentation also.
> Sorry, no idea about ice_dsi.
My mistake ICE_PMD is what I am referring to.

> 
> >
> > snipped


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-13  2:39       ` Lu, Wenzhuo
@ 2018-12-13 15:13         ` Ferruh Yigit
  2018-12-14  2:30           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-13 15:13 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

On 12/13/2018 2:39 AM, Lu, Wenzhuo wrote:
> Hi Ferruh,
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Thursday, December 13, 2018 2:18 AM
>> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
>> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
>> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device
>> initialization
>>
>> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>>> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
>>> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>
>> <...>
>>
>>> @@ -297,6 +297,15 @@
>> CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
>>>  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>>>
>>>  #
>>> +# Compile burst-oriented ICE PMD driver #
>> CONFIG_RTE_LIBRTE_ICE_PMD=y
>>> +CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
>> CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
>>> +CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
>>> +CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
>>
>> Is there a way to convert this into runtime config? Does it needs to be
>> compile time config?
> If we only consider the functionality, we can totally remove this macro. That's why it's set to 'y' by default.
> We introduce this macro for users to improve performance in some specific cases. For example, if the MTU is small enough, we can totally remove the code wrapped by this macro, so the performance is better. Considering the purpose, to achieve the best performance, it's hard to make it a runtime configuration.

Thanks for clarification.
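
For reference, the kind of compile-time gating being described; the function names below are illustrative, not the actual ice RX path:

#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
	/* The bulk-allocation RX path is compiled in only when the
	 * option is 'y'; setting it to 'n' removes this code entirely
	 * instead of testing a runtime flag on every burst. */
	nb_rx = ice_recv_pkts_bulk_alloc(rxq, rx_pkts, nb_pkts);
#else
	nb_rx = ice_recv_pkts(rxq, rx_pkts, nb_pkts);
#endif
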
>>
>>> +
>>> +include $(RTE_SDK)/mk/rte.lib.mk
>>> diff --git a/drivers/net/ice/ice_ethdev.c
>>> b/drivers/net/ice/ice_ethdev.c new file mode 100644 index
>>> 0000000..e0bf15c
>>> --- /dev/null
>>> +++ b/drivers/net/ice/ice_ethdev.c
>>> @@ -0,0 +1,640 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018 Intel Corporation  */
>>> +
>>> +#include <rte_ethdev_pci.h>
>>> +
>>> +#include "base/ice_sched.h"
>>> +#include "ice_ethdev.h"
>>> +#include "ice_rxtx.h"
>>> +
>>> +#define ICE_MAX_QP_NUM "max_queue_pair_num"
>>
>> When documentation is added into this patch, can you also add this runtime
>> config to that please?
> The macro? It's not a configuration. It’s a string used internally.

It is used as devargs string:

 +	const char *queue_num_key = ICE_MAX_QP_NUM;
 <...>
 +	if (!rte_kvargs_count(kvlist, queue_num_key)) {
 +		rte_kvargs_free(kvlist);
 +		return 0;
 +	}

And briefly what I am suggesting is to document runtime devargs in ice
documentation.
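
For illustration, a minimal sketch of how such a devarg is typically parsed with the rte_kvargs API (the handler and the qp_num/valid_keys variables are hypothetical; only the key string ICE_MAX_QP_NUM comes from the patch):

static int
parse_queue_num(const char *key __rte_unused, const char *value, void *args)
{
	uint16_t *num = args;

	/* convert and range-check "max_queue_pair_num=<n>" */
	*num = (uint16_t)atoi(value);
	return 0;
}

struct rte_kvargs *kvlist = rte_kvargs_parse(devargs->args, valid_keys);
if (kvlist != NULL) {
	rte_kvargs_process(kvlist, ICE_MAX_QP_NUM, parse_queue_num, &qp_num);
	rte_kvargs_free(kvlist);
}

Documenting this as e.g. -w 18:00.0,max_queue_pair_num=4 in ice.rst is what the comment above asks for.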

> 
>>
>> <...>
>>
>>> +static int
>>> +ice_dev_init(struct rte_eth_dev *dev) {
>>> +	struct rte_pci_device *pci_dev;
>>> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
>>> dev_private);
>>> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
>>> dev_private);
>>> +	int ret;
>>> +
>>> +	dev->dev_ops = &ice_eth_dev_ops;
>>> +
>>> +	pci_dev = RTE_DEV_TO_PCI(dev->device);
>>> +
>>> +	rte_eth_copy_pci_info(dev, pci_dev);
>>
>> This is done by rte_eth_dev_pci_generic_probe(), do we need here?
> No, we only need the info, don’t want to probe.

Sorry, I didn't get it, let me rephrase:

rte_eth_copy_pci_info() called as following with existing code:
ice_pci_probe()
  rte_eth_dev_pci_generic_probe()
    rte_eth_dev_pci_allocate()  <---- (1)
      rte_eth_copy_pci_info()
    ice_dev_init()
      rte_eth_copy_pci_info()   <---- (2)

I am asking can we remove (2) one, since (1) does the copy already?

> 
>>
>> <...>
>>
>>> +RTE_INIT(ice_init_log);
>>> +static void
>>> +ice_init_log(void)
>>
>> Can merge these lines, please check other samples.
> Will change it in v4.
> 
>>
>>> +{
>>> +	ice_logtype_init = rte_log_register("pmd.ice.init");
>>
>> pmd.net.ice.init
> Will change it in v4.
> 
>>
>>> +	if (ice_logtype_init >= 0)
>>> +		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
>>> +	ice_logtype_driver = rte_log_register("pmd.ice.driver");
>>
>> pmd.net.ice.driver
> Will change it in v4.
> 
>>
>> <...>
>>
>>> +static void
>>> +ice_dev_close(struct rte_eth_dev *dev) {
>>> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
>>> dev_private);
>>> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
>>> dev_private);
>>> +
>>> +	ice_res_pool_destroy(&pf->msix_pool);
>>> +	ice_release_vsi(pf->main_vsi);
>>> +
>>> +	ice_shutdown_all_ctrlq(hw);
>>> +}
>>
>> I am mostly for ordering functions in a way that it doesn't require the
>> forward declaration, which is mostly helps reading the code since the
>> function order is close the call order.
> I just want to align the order with ice_eth_dev_ops. But anyway I can move it forward.

I think it makes sense to align the order with ice_eth_dev_ops, which again improves
readability, and both can be done at the same time:

static dev_ops1() {}
static dev_ops2() {}
static dev_ops3() {}

static const struct eth_dev_ops ice_eth_dev_ops = {
  ops1 = dev_ops1,
  ops2 = dev_ops2,
  ops3 = dev_ops3,
};

ice_dev_init() {
  dev->dev_ops = &ice_eth_dev_ops;
}

ice_pci_probe() {
  rte_eth_dev_pci_generic_probe(..., ice_dev_init);
}

rte_pci_driver rte_ice_pmd = {
  .probe = ice_pci_probe,
  .remove = ice_pci_remove,
};

RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);

> 
>>
>> It is up to you but also for sake of consistancy I think better to move this
>> function up, and leave probe/remove/init_log functions as last functions in
>> file.
> So, it was a bad attempt to show the strong connection of probe/remove with dev_init/uninit :)

No, it is good to show that relation and keep them close; sorry, perhaps I missed
your point, but I wasn't suggesting separating probe/remove & dev_init/uninit.

It looks better to me to keep the file entry/exit points at the bottom and develop
upwards as the call stack moves deeper, instead of functions jumping within the
file against the call stack. This improves readability because you only scroll one
way as you advance through the code, and you are exposed to the code in order
from more abstract to more detailed. That order also has the benefit of removing
forward declarations.

I am aware it is already complex to develop a new PMD and you are already
dealing with lots of hardware details. I have no intention of making it more
complex for you; please take this only as an input.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting
  2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-13 21:05     ` Ferruh Yigit
  2018-12-14  2:33       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-13 21:05 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> Add ops mtu_set.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  drivers/net/ice/ice_ethdev.c | 34 ++++++++++++++++++++++++++++++++++

Can you please update the ice.ini file in the same patch feature added, it both
helps to verify the claimed features and provides extra documentation to patch,
same for all patches adding a feature.

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build Wenzhuo Lu
@ 2018-12-13 21:15     ` Ferruh Yigit
  2018-12-14  2:38       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-13 21:15 UTC (permalink / raw)
  To: Wenzhuo Lu, dev

On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  drivers/net/ice/base/meson.build | 30 ++++++++++++++++++++++++++++++
>  drivers/net/ice/meson.build      | 15 +++++++++++++++
>  drivers/net/meson.build          |  1 +

I think it is better to add the meson file in the patch where the Makefile is added,
and update it in patches where required, instead of having a separate patch for it.

>  3 files changed, 46 insertions(+)
>  create mode 100644 drivers/net/ice/base/meson.build
>  create mode 100644 drivers/net/ice/meson.build
> 
> diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
> new file mode 100644
> index 0000000..5aafff3
> --- /dev/null
> +++ b/drivers/net/ice/base/meson.build
> @@ -0,0 +1,30 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +sources = [
> +	'ice_controlq.c',
> +	'ice_common.c',
> +	'ice_sched.c',
> +	'ice_switch.c',
> +	'ice_nvm.c',

What about ice_dcb.c? It is in the base folder; isn't it compiled?

<...>

> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +cflags += ['-DALLOW_EXPERIMENTAL_API']

The Makefile doesn't have this flag, so I guess it is not needed; the base folder
meson file also has it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops Wenzhuo Lu
@ 2018-12-13 21:30     ` Ferruh Yigit
  2018-12-14  2:39       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-13 21:30 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu, Olivier MATZ

On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> Add below ops,
> rx_descriptor_done
> rx_descriptor_status
> tx_descriptor_status

I guess it is our mistake not to have clarified this, sorry about it, but
"rx_descriptor_status" replaces "rx_descriptor_done"; it is an extended
version. I cc'ed Olivier to correct me in case I am wrong.

So when "rx_descriptor_status" implemented, "rx_descriptor_done" can be dropped.

Please see commit log of:
Commit b1b700ce7d6f ("ethdev: add descriptor status API")

copy-paste related part:
"
    The descriptor_done() API, and probably the rx_queue_count() API could
    be replaced by this new API as soon as it is implemented on all PMDs.
"

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note
  2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note Wenzhuo Lu
@ 2018-12-13 21:34     ` Ferruh Yigit
  2018-12-14  2:42       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-13 21:34 UTC (permalink / raw)
  To: Wenzhuo Lu, dev

On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---
>  MAINTAINERS                            |   1 +
>  doc/guides/nics/features/ice.ini       |  38 +++++++++++++
>  doc/guides/nics/ice.rst                | 101 +++++++++++++++++++++++++++++++++

Need to update doc/guides/nics/index.rst too to include ice.rst

>  doc/guides/rel_notes/release_19_02.rst |   4 ++
>  4 files changed, 144 insertions(+)
>  create mode 100644 doc/guides/nics/features/ice.ini
>  create mode 100644 doc/guides/nics/ice.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 37f3bf7..cd01565 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
>  M: Wenzhuo Lu <wenzhuo.lu@intel.com>
>  T: git://dpdk.org/next/dpdk-next-net-intel
>  F: drivers/net/ice/
> +F: doc/guides/nics/features/ice*.ini

Should add .rst file too?

<...>

> @@ -0,0 +1,101 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018 Intel Corporation.
> +
> +ICE Poll Mode Driver
> +======================
> +
> +The ice PMD (librte_pmd_ice) provides poll mode driver support for
> +10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
> +the Intel Ethernet Controller E810.

Please remember to add links to product web-page when it is available.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
  2018-12-13  8:47     ` Zhang, Qi Z
@ 2018-12-14  0:36       ` Lu, Wenzhuo
  2018-12-14  2:43         ` Zhang, Qi Z
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  0:36 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Qi,

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, December 13, 2018 4:47 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> 
> Hi Wenzhuo:
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> > Sent: Wednesday, December 12, 2018 3:00 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> >
> > Add ops link_update.
> 
> 
> > +ice_interrupt_handler(void *param)
> > +{
> > +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +	uint32_t oicr;
> 
> I saw the patch also enabled interrupt handler, which looks like be
> independent and not related with the commit log.
> It's better to separate it.
Here we only enable the LSC interrupt, for link update, so I prefer putting it in this patch.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
  2018-12-13  9:00     ` Zhang, Qi Z
@ 2018-12-14  0:37       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  0:37 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Qi,

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, December 13, 2018 5:00 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> > Sent: Wednesday, December 12, 2018 3:00 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops
> >
> > Add below ops,
> > mac_addr_set
> > mac_addr_add
> > mac_addr_remove
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> .....
> 
> 
> > ---
> >  drivers/net/ice/ice_ethdev.c | 233
> > +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 233 insertions(+)
> >
> > +static int ice_macaddr_set(struct rte_eth_dev *dev,
> > +			   struct ether_addr *mac_addr)
> > +{
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	struct ice_mac_filter *f;
> > +	uint8_t flags = 0;
> > +	int ret;
> > +
> > +	if (!is_valid_assigned_ether_addr(mac_addr)) {
> > +		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
> > +		return -EINVAL;
> > +	}
> > +
> > +	TAILQ_FOREACH(f, &vsi->mac_list, next) {
> > +		if (is_same_ether_addr(&pf->dev_addr, &f-
> >mac_info.mac_addr))
> > +			break;
> > +	}
> > +
> > +	if (!f) {
> > +		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
> > +		return -EIO;
> > +	}
> > +
> > +	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
> > +	if (ret != ICE_SUCCESS) {
> > +		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
> > +		return -EIO;
> > +	}
> > +	ret = ice_add_mac_filter(vsi, mac_addr);
> > +	if (ret != ICE_SUCCESS) {
> > +		PMD_DRV_LOG(ERR, "Failed to add mac filter");
> > +		return -EIO;
> > +	}
> > +	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
> > +
> > +	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
> > +	ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
> 
> Should check return value of the AQ command in case some error happen?
Good suggestion. I'll add an error print here.
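
For reference, a minimal sketch of the agreed change against the quoted code (the log text is illustrative):

flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
ret = ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
if (ret != ICE_SUCCESS)
	PMD_DRV_LOG(ERR, "Failed to set manage mac");

return 0;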

> 
> > +
> > +	return 0;
> > +}
> > +

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information
  2018-12-13  9:10     ` Zhang, Qi Z
@ 2018-12-14  0:41       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  0:41 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Qi,

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, December 13, 2018 5:10 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device
> information
> 
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> > Sent: Wednesday, December 12, 2018 3:00 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device
> > information
> >
> > Add ops dev_infos_get.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  drivers/net/ice/ice_ethdev.c | 123
> > +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 123 insertions(+)
> >
> 
> >  }
> > +
> > +static void
> > +ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> > +*dev_info) {
> > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
> > +
> > +	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
> > +	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
> > +	dev_info->max_rx_queues = vsi->nb_qps;
> > +	dev_info->max_tx_queues = vsi->nb_qps;
> > +	dev_info->max_mac_addrs = vsi->max_macaddrs;
> > +	dev_info->max_vfs = pci_dev->max_vfs;
> > +
> > +	dev_info->rx_offload_capa =
> > +		DEV_RX_OFFLOAD_VLAN_STRIP |
> > +		DEV_RX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_UDP_CKSUM |
> > +		DEV_RX_OFFLOAD_TCP_CKSUM |
> > +		DEV_RX_OFFLOAD_QINQ_STRIP |
> > +		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_RX_OFFLOAD_VLAN_EXTEND |
> > +		DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> I think we missed some offload here which ice driver does support
> 
> Rx port offload
> DEV_RX_OFFLOAD_KEEP_CRC
> DEV_RX_OFFLOAD_SCATTER
> DEV_RX_OFFLOAD_VLAN_FILTER
> 
> Tx queue offload
> DEV_TX_OFFLOAD_MBUF_FAST_FREE
Thanks. Will add it in v4.
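
For reference, the fix would amount to extending the quoted capability masks, e.g.:

/* Additional offloads the ice driver supports, per the list above. */
dev_info->rx_offload_capa |=
	DEV_RX_OFFLOAD_KEEP_CRC |
	DEV_RX_OFFLOAD_SCATTER |
	DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_offload_capa |=
	DEV_TX_OFFLOAD_MBUF_FAST_FREE;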

> 
> > +	dev_info->tx_offload_capa =
> > +		DEV_TX_OFFLOAD_VLAN_INSERT |
> > +		DEV_TX_OFFLOAD_QINQ_INSERT |
> > +		DEV_TX_OFFLOAD_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_UDP_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_CKSUM |
> > +		DEV_TX_OFFLOAD_SCTP_CKSUM |
> > +		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +		DEV_TX_OFFLOAD_TCP_TSO;
> > +	dev_info->rx_queue_offload_capa = 0;
> > +	dev_info->tx_queue_offload_capa = 0;
> 
> > +

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-13 13:09       ` Varghese, Vipin
@ 2018-12-14  1:11         ` Lu, Wenzhuo
  2018-12-14  3:26           ` Varghese, Vipin
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  1:11 UTC (permalink / raw)
  To: Varghese, Vipin, dev

Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 13, 2018 9:10 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi Wenzhuo,
> 
> Thanks for the update, couple of suggestion in my opinion shared below
> 
> Snipped
> > > [PATCH v2 02/20] net/ice: support device initialization
> > > > +# Compile burst-oriented ICE PMD driver #
> > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > >
> > > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested to
> > > configure. But here is it already set to 'y'. Is this correct? If
> > > yes can you update ' https://patches.dpdk.org/patch/48488/'
> > We've discussed the setting. I don’t know anything left. If there's,
> > please let me know.
> I think I poorly communicated, so let me try again. In document it is stated to
> configure as 'y'. But in your default config is 'y'. So either make the default as
> 'n' so user can configure or reword in document as 'Ensure the config is set
> to y for before building'
In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)". There is no conflict with the config file.

> 
> >
> > >
> > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > Should not meson build option be add start. That is in patch
> > > > > > > 1/20 so compile options does not fail?
> > > > > > It will not fail. Enabling the compile earlier only means the
> > > > > > code can be
> > > > > compiled.
> > > > > > But, to use this device we do need the whole patch set. From
> > > > > > this point of view, compiling it at the end maybe better.
> > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > Sure, meson build will not be broken by any one of these patches.
> > > > Only until this patch, what built by meson can support ice.
> > > Thanks for confirmation that you have tried './devtools/test-meson-
> > > builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
> > I said meson build is working. But don't know what's ICE_DSI PMD.
> 
> Once again apologies if I poorly communicated ICE_PMD, my question is 'if I
> take patch 1/20 in v2 can I build for librte_pmd_ice'? is not we are expecting
> each additional layering for functionality like mtu set, switch set should be
> compiling and not just the last build?
Oh, you want meson build supported from the beginning. I can move it forward, although I recommend using it only after all the patches are applied.

> 
> >
> > >
> > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > >
> > >  [PATCH v2 14/20] net/ice: support statistics
> > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> >eth.tx_multicast
> > +
> > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > >
> > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > Should we add VSI VLAN here?
> > > > Don't need. They're different functions. We add crc length here
> > > > because of HW counts the packet length before crc is added.
> > > So you are not fetching stats from HW registers from switch is this
> correct?
> > > How will you get stats for actually transmitted in xstats? As I
> > > understand xstats is for switch HW stats right?
> > No. The stats is got from HW. We just correct it as HW doesn’t have
> > the chance to add CRC length. But I don't understand why switch is
> mentioned here.
> Will ice pmd  for CVL expose all the switch stats as 'xstats'? If yes, can you
> share in patch comments what are switch level stats?
No. xstats means the extended stats, which are not covered by the basic stats. They can be anything that is not in the basic stats.
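
For illustration from the application side, extended stats are fetched
as id/value pairs together with their names, rather than through the
fixed rte_eth_stats fields. A minimal sketch (error handling omitted):

        #include <inttypes.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <rte_ethdev.h>

        static void
        dump_xstats(uint16_t port_id)
        {
                /* First call with NULL only sizes the arrays. */
                int n = rte_eth_xstats_get_names(port_id, NULL, 0);
                struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
                struct rte_eth_xstat *xstats = calloc(n, sizeof(*xstats));

                rte_eth_xstats_get_names(port_id, names, n);
                n = rte_eth_xstats_get(port_id, xstats, n);
                for (int i = 0; i < n; i++)
                        printf("%s: %" PRIu64 "\n",
                               names[xstats[i].id].name, xstats[i].value);
                free(names);
                free(xstats);
        }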

> 
> >
> > >
> > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > >  > > Does this mean per queue offload capability is not supported?
> > > If
> > > > > yes, can you mention this in release notes under 'support or
> limitation'
> > > > No, it's not supported. We have a document, ice.ini, to list all
> > > > the features supported. All the others are not supported.
> > > > BTW, I don't think anything not supported is limitation.
> > > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > > offload for RX and TX. But you are stating in ice.ini you are
> > > listing offload supports. So let me rephrase the question 'if you
> > > support port level offload capability, it will reflect for all queues rx and tx.
> > > But if you reflect queue level offload as 0 for rx and tx, then APIs
> > > rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue offload
> > > enabled should fail. Is this correct understanding?'
> > Sorry, I don’t know what's ICE_DSI_PMD.
> ICE_PMD
Not exactly. We follow the rule for queue- and port-level offload settings: if a feature is enabled at the port level, the queue setting can be ignored; if it is not enabled at the port level, it can still be enabled per queue.
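
For illustration only, the rule can be sketched like this (not the
actual ice code; the helper name is hypothetical):

        /* Port-level offloads always apply to every queue; a queue may
         * enable extra features on top of them, so the effective set is
         * the union of the two levels.
         */
        static uint64_t
        effective_rx_offloads(struct rte_eth_dev *dev, uint64_t queue_offloads)
        {
                uint64_t port_offloads = dev->data->dev_conf.rxmode.offloads;

                return port_offloads | queue_offloads;
        }

So a queue-level value of 0 does not switch off anything that is
already enabled at the port level.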

> 
> 
> >
> > >
> > > > > If device switch is not configured (default value from NVM)
> > > > > should we highlight the switch can support speed 10, 100, 1000,
> > > > > 1000 and son
> > > on?
> > > > No, this's the capability getting from HW.
> > > If HW is supported or configured for 10, 100, 25G then those should
> > > be returned correctly this I agree. But when the device is queried
> > > for capability it should highlight all supported speeds of switch. Am I right?
> > No. Here shows the result not all the speeds supported. Like the speed
> > after auto negotiation.
> As per your current statement "If user uses API rte_eth_dev_info_get to get
> speed_capa, current speed will be returned as auto negotiated value and
> not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I will
> leave this to others to comment since in my humble opinion this is not
> expected.
OK. I'll change it to the bitmap.
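
For example, inside the dev_infos_get ops the capability would then be
reported as an OR of the supported speed flags, along these lines (a
sketch; the actual set should be derived from the HW capability, not
hard-coded):

        dev_info->speed_capa = ETH_LINK_SPEED_10M |
                               ETH_LINK_SPEED_100M |
                               ETH_LINK_SPEED_1G |
                               ETH_LINK_SPEED_10G |
                               ETH_LINK_SPEED_25G;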

> 
> >
> > >
> > > > > If speed is not true as stated above, can you please add this to
> > > > > release notes and documentation.
> > > > Here listed all the case we can get from HW.
> > > Please add to ice_dsi documentation also.
> > Sorry, no idea about ice_dsi.
> My mistake ICE_PMD is what I am referring to.
> 
> >
> > >
> > > snipped


* Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization
  2018-12-13 15:13         ` Ferruh Yigit
@ 2018-12-14  2:30           ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  2:30 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Thursday, December 13, 2018 11:14 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device
> initialization
> 
> On 12/13/2018 2:39 AM, Lu, Wenzhuo wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Yigit, Ferruh
> >> Sent: Thursday, December 13, 2018 2:18 AM
> >> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> >> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> >> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> >> Subject: Re: [dpdk-dev] [PATCH v3 16/34] net/ice: support device
> >> initialization
> >>
> >> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> >>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >>> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> >>> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> >>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> >>
> >> <...>
> >>
> >>> @@ -297,6 +297,15 @@
> >> CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
> >>>  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >>>
> >>>  #
> >>> +# Compile burst-oriented ICE PMD driver #
> >> CONFIG_RTE_LIBRTE_ICE_PMD=y
> >>> +CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
> >> CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
> >>> +CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
> >>> +CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
> >>
> >> Is there a way to convert this into runtime config? Does it needs to
> >> be compile time config?
> > If only considering the functionality, we can totally remove this macro.
> That's why it's set to 'y' by default.
> > We introduce this macro for the users to improve performance for some
> specific cases. For example, if the MTU is small enough, we can totally
> remove the code wrapped by this macro. So the performance is better.
> Considering the purpose, to achieve the best performance, it's hard to make
> it a runtime configure.
> 
> Thanks for clarification.
> >>
> >>> +
> >>> +include $(RTE_SDK)/mk/rte.lib.mk
> >>> diff --git a/drivers/net/ice/ice_ethdev.c
> >>> b/drivers/net/ice/ice_ethdev.c new file mode 100644 index
> >>> 0000000..e0bf15c
> >>> --- /dev/null
> >>> +++ b/drivers/net/ice/ice_ethdev.c
> >>> @@ -0,0 +1,640 @@
> >>> +/* SPDX-License-Identifier: BSD-3-Clause
> >>> + * Copyright(c) 2018 Intel Corporation  */
> >>> +
> >>> +#include <rte_ethdev_pci.h>
> >>> +
> >>> +#include "base/ice_sched.h"
> >>> +#include "ice_ethdev.h"
> >>> +#include "ice_rxtx.h"
> >>> +
> >>> +#define ICE_MAX_QP_NUM "max_queue_pair_num"
> >>
> >> When documentation is added into this patch, can you also add this
> >> runtime config to that please?
> > The macro? It's not a configuration. It’s a string used internally.
> 
> It is used as devargs string:
> 
>  +	const char *queue_num_key = ICE_MAX_QP_NUM;
>  <...>
>  +	if (!rte_kvargs_count(kvlist, queue_num_key)) {
>  +		rte_kvargs_free(kvlist);
>  +		return 0;
>  +	}
> 
> And briefly what I am suggesting is to document runtime devargs in ice
> documentation.
Sorry, I didn’t get it at first. Will update the document.
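
For reference, a minimal sketch of how such a devarg is passed and
consumed (the handler name parse_queue_num_cb is hypothetical; the key
is the ICE_MAX_QP_NUM string quoted above):

        testpmd -w 0000:88:00.0,max_queue_pair_num=8 -- -i

        /* Driver side, using <rte_kvargs.h>: */
        struct rte_kvargs *kvlist =
                rte_kvargs_parse(dev->device->devargs->args, valid_keys);
        if (kvlist != NULL) {
                rte_kvargs_process(kvlist, ICE_MAX_QP_NUM,
                                   parse_queue_num_cb, pf);
                rte_kvargs_free(kvlist);
        }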

> 
> >
> >>
> >> <...>
> >>
> >>> +static int
> >>> +ice_dev_init(struct rte_eth_dev *dev) {
> >>> +	struct rte_pci_device *pci_dev;
> >>> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >>> dev_private);
> >>> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >>> dev_private);
> >>> +	int ret;
> >>> +
> >>> +	dev->dev_ops = &ice_eth_dev_ops;
> >>> +
> >>> +	pci_dev = RTE_DEV_TO_PCI(dev->device);
> >>> +
> >>> +	rte_eth_copy_pci_info(dev, pci_dev);
> >>
> >> This is done by rte_eth_dev_pci_generic_probe(), do we need here?
> > No, we only need the info, don’t want to probe.
> 
> Sorry, I didn't get it, let me rephrase:
> 
> rte_eth_copy_pci_info() called as following with existing code:
> ice_pci_probe()
>   rte_eth_dev_pci_generic_probe()
>     rte_eth_dev_pci_allocate()  <---- (1)
>       rte_eth_copy_pci_info()
>     ice_dev_init()
>       rte_eth_copy_pci_info()   <---- (2)
> 
> I am asking can we remove (2) one, since (1) does the copy already?
Agree it looks redundant. Will check if we can remove (2).

> 
> >
> >>
> >> <...>
> >>
> >>> +RTE_INIT(ice_init_log);
> >>> +static void
> >>> +ice_init_log(void)
> >>
> >> Can merge these lines, please check other samples.
> > Will change it in v4.
> >
> >>
> >>> +{
> >>> +	ice_logtype_init = rte_log_register("pmd.ice.init");
> >>
> >> pmd.net.ice.init
> > Will change it in v4.
> >
> >>
> >>> +	if (ice_logtype_init >= 0)
> >>> +		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
> >>> +	ice_logtype_driver = rte_log_register("pmd.ice.driver");
> >>
> >> pmd.net.ice.driver
> > Will change it in v4.
> >
> >>
> >> <...>
> >>
> >>> +static void
> >>> +ice_dev_close(struct rte_eth_dev *dev) {
> >>> +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >>> dev_private);
> >>> +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >>> dev_private);
> >>> +
> >>> +	ice_res_pool_destroy(&pf->msix_pool);
> >>> +	ice_release_vsi(pf->main_vsi);
> >>> +
> >>> +	ice_shutdown_all_ctrlq(hw);
> >>> +}
> >>
> >> I am mostly for ordering functions in a way that it doesn't require
> >> the forward declaration, which is mostly helps reading the code since
> >> the function order is close the call order.
> > I just want to align the order with ice_eth_dev_ops. But anyway I can move
> it forward.
> 
> I think it make sense to align the order with ice_eth_dev_ops, again
> improves readability, and both can be done at same time:
> 
> static dev_ops1() {}
> static dev_ops2() {}
> static dev_ops3() {}
> 
> static const struct eth_dev_ops ice_eth_dev_ops = {
>   ops1 = dev_ops1,
>   ops2 = dev_ops2,
>   ops3 = dev_ops3,
> };
> 
> ice_dev_init() {
>   dev->dev_ops = &ice_eth_dev_ops;
> }
> 
> ice_pci_probe() {
>   rte_eth_dev_pci_generic_probe(..., ice_dev_init); }
> 
> rte_pci_driver rte_ice_pmd = {
>   .probe = ice_pci_probe,
>   .remove = ice_pci_remove,
> };
> 
> RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
> 
> >
> >>
> >> It is up to you but also for sake of consistancy I think better to
> >> move this function up, and leave probe/remove/init_log functions as
> >> last functions in file.
> > So, it's a bad try to show the strong connection of probe/remove with
> > dev_init/uninit :)
> 
> No it is good to show that relation and keep them close, sorry perhaps I
> missed your point but I wasn't suggesting to separate probe/remove &
> dev_init/uninit.
> 
> It looks me better to keep file entry/exit points at the bottom and develop
> through upwards as call-stack moves deeper, instead of functions jump
> within the file against call-stack. This improves readability because you only
> scroll one way as you advance through code also you expose to code from
> more abstract to detailed order. That order also has benefit of removing
> forward declarations.
> 
> I am aware it is already complex to develop a new PMD and you are already
> dealing with lots of hardware details, I have no intention to make it more
> complex for you, please take this only as an input.
I have the same feeling. Actually, I did a lot of this kind of reordering before creating the patches. Will make it better.



* Re: [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting
  2018-12-13 21:05     ` Ferruh Yigit
@ 2018-12-14  2:33       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  2:33 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:06 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting
> 
> On 12/12/2018 6:59 AM, Wenzhuo Lu wrote:
> > Add ops mtu_set.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  drivers/net/ice/ice_ethdev.c | 34
> ++++++++++++++++++++++++++++++++++
> 
> Can you please update the ice.ini file in the same patch feature added, it
> both helps to verify the claimed features and provides extra documentation
> to patch, same for all patches adding a feature.
Will change it in v4.

> 
> Thanks,
> ferruh


* Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
  2018-12-13 21:15     ` Ferruh Yigit
@ 2018-12-14  2:38       ` Lu, Wenzhuo
  2018-12-14  8:47         ` Ferruh Yigit
  0 siblings, 1 reply; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  2:38 UTC (permalink / raw)
  To: Yigit, Ferruh, dev

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:16 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
> 
> On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > ---
> >  drivers/net/ice/base/meson.build | 30
> ++++++++++++++++++++++++++++++
> >  drivers/net/ice/meson.build      | 15 +++++++++++++++
> >  drivers/net/meson.build          |  1 +
> 
> I think it is better to add meson file with on the patch Makefile added and
> update in patches where required, instead of having a separate patch for it
Discussed the same with Vipin, will change it.

> 
> >  3 files changed, 46 insertions(+)
> >  create mode 100644 drivers/net/ice/base/meson.build  create mode
> > 100644 drivers/net/ice/meson.build
> >
> > diff --git a/drivers/net/ice/base/meson.build
> > b/drivers/net/ice/base/meson.build
> > new file mode 100644
> > index 0000000..5aafff3
> > --- /dev/null
> > +++ b/drivers/net/ice/base/meson.build
> > @@ -0,0 +1,30 @@
> > +# SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Intel
> > +Corporation
> > +
> > +sources = [
> > +	'ice_controlq.c',
> > +	'ice_common.c',
> > +	'ice_sched.c',
> > +	'ice_switch.c',
> > +	'ice_nvm.c',
> 
> ice_dcb.c? It is in base folder, isn't is compiled?
Currently we don’t use it. Just leave it uncompiled.
> 
> <...>
> 
> > @@ -0,0 +1,15 @@
> > +# SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Intel
> > +Corporation
> > +
> > +cflags += ['-DALLOW_EXPERIMENTAL_API']
> 
> Makefile doesn't have this flag, I guess it is not needed, base folder meson
> file also has it.
Thanks for the reminder. Will remove it.


* Re: [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops
  2018-12-13 21:30     ` Ferruh Yigit
@ 2018-12-14  2:39       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  2:39 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing, Olivier MATZ

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:30 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Olivier MATZ
> <olivier.matz@6wind.com>
> Subject: Re: [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops
> 
> On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> > Add below ops,
> > rx_descriptor_done
> > rx_descriptor_status
> > tx_descriptor_status
> 
> I guess this is our mistake to not clarify this, sorry about it, but
> "rx_descriptor_status" replaces "rx_descriptor_done", it is and extended
> version, cc'ed Olivier to correct me in case I am wrong.
> 
> So when "rx_descriptor_status" implemented, "rx_descriptor_done" can be
> dropped.
> 
> Please see commit log of:
> Commit b1b700ce7d6f ("ethdev: add descriptor status API")
> 
> copy-paste related part:
> "
>     The descriptor_done() API, and probably the rx_queue_count() API could
>     be replaced by this new API as soon as it is implemented on all PMDs.
> "
Thanks for the reminder. Will remove it.
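
For context, the newer API reports one of three states per descriptor
offset instead of a done/not-done boolean; a minimal usage sketch:

        int st = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

        if (st == RTE_ETH_RX_DESC_DONE) {
                /* written back by HW; packet data is ready */
        } else if (st == RTE_ETH_RX_DESC_AVAIL) {
                /* still owned by HW, not yet filled */
        } else {
                /* RTE_ETH_RX_DESC_UNAVAIL: unused or offset out of range */
        }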

* Re: [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note
  2018-12-13 21:34     ` Ferruh Yigit
@ 2018-12-14  2:42       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  2:42 UTC (permalink / raw)
  To: Yigit, Ferruh, dev

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:34 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and
> update release note
> 
> On 12/12/2018 7:00 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > ---
> >  MAINTAINERS                            |   1 +
> >  doc/guides/nics/features/ice.ini       |  38 +++++++++++++
> >  doc/guides/nics/ice.rst                | 101
> +++++++++++++++++++++++++++++++++
> 
> Need to update doc/guides/nics/index.rst too to include ice.rst
Will update it.
> 
> >  doc/guides/rel_notes/release_19_02.rst |   4 ++
> >  4 files changed, 144 insertions(+)
> >  create mode 100644 doc/guides/nics/features/ice.ini  create mode
> > 100644 doc/guides/nics/ice.rst
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS index 37f3bf7..cd01565 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -598,6 +598,7 @@ M: Qiming Yang <qiming.yang@intel.com>
> >  M: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >  T: git://dpdk.org/next/dpdk-next-net-intel
> >  F: drivers/net/ice/
> > +F: doc/guides/nics/features/ice*.ini
> 
> Should add .rst file too?
Will add it.
> 
> <...>
> 
> > @@ -0,0 +1,101 @@
> > +..  SPDX-License-Identifier: BSD-3-Clause
> > +    Copyright(c) 2018 Intel Corporation.
> > +
> > +ICE Poll Mode Driver
> > +======================
> > +
> > +The ice PMD (librte_pmd_ice) provides poll mode driver support for
> > +10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on the
> > +Intel Ethernet Controller E810.
> 
> Please remember to add links to product web-page when it is available.
Oh, will add it later.

* Re: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
  2018-12-14  0:36       ` Lu, Wenzhuo
@ 2018-12-14  2:43         ` Zhang, Qi Z
  2018-12-14  8:09           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-14  2:43 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing



> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Friday, December 14, 2018 8:36 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> 
> Hi Qi,
> 
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, December 13, 2018 4:47 PM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > Jingjing <jingjing.wu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> >
> > Hi Wenzhuo:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> > > Sent: Wednesday, December 12, 2018 3:00 PM
> > > To: dev@dpdk.org
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > > Jingjing <jingjing.wu@intel.com>
> > > Subject: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> > >
> > > Add ops link_update.
> >
> >
> > > +ice_interrupt_handler(void *param)
> > > +{
> > > +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > >dev_private);
> > > +	uint32_t oicr;
> >
> > I saw the patch also enabled interrupt handler, which looks like be
> > independent and not related with the commit log.
> > It's better to separate it.
> Here we only enable LSC interrupt for link update. So, I prefer putting it in this
> patch.

OK. I suggest modifying the commit log to mention the LSC part, since enabling LSC is different from adding the link_update ops.

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-14  1:11         ` Lu, Wenzhuo
@ 2018-12-14  3:26           ` Varghese, Vipin
  2018-12-14  8:20             ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Varghese, Vipin @ 2018-12-14  3:26 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

Hi Wenzhuo,

snipped
> >
> > Thanks for the update, couple of suggestion in my opinion shared below
> >
> > Snipped
> > > > [PATCH v2 02/20] net/ice: support device initialization
> > > > > +# Compile burst-oriented ICE PMD driver #
> > > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > > >
> > > > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested
> > > > to configure. But here is it already set to 'y'. Is this correct?
> > > > If yes can you update ' https://patches.dpdk.org/patch/48488/'
> > > We've discussed the setting. I don’t know anything left. If there's,
> > > please let me know.
> > I think I poorly communicated, so let me try again. In document it is
> > stated to configure as 'y'. But in your default config is 'y'. So
> > either make the default as 'n' so user can configure or reword in
> > document as 'Ensure the config is set to y for before building'
> In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``) ". No
> conflict with the configure file.
I think I am failing to convey my point even after multiple attempts. Hence I humbly leave this to others on the mailing list to get it sorted.

> 
> >
> > >
> > > >
> > > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > > Should not meson build option be add start. That is in
> > > > > > > > patch
> > > > > > > > 1/20 so compile options does not fail?
> > > > > > > It will not fail. Enabling the compile earlier only means
> > > > > > > the code can be
> > > > > > compiled.
> > > > > > > But, to use this device we do need the whole patch set. From
> > > > > > > this point of view, compiling it at the end maybe better.
> > > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > > Sure, meson build will not be broken by any one of these patches.
> > > > > Only until this patch, what built by meson can support ice.
> > > > Thanks for confirmation that you have tried
> > > > './devtools/test-meson- builds.sh' and the intermediate build for ICE_DSI
> PMD does not fail.
> > > I said meson build is working. But don't know what's ICE_DSI PMD.
> >
> > Once again apologies if I poorly communicated ICE_PMD, my question is
> > 'if I take patch 1/20 in v2 can I build for librte_pmd_ice'? is not we
> > are expecting each additional layering for functionality like mtu set,
> > switch set should be compiling and not just the last build?
> O, you want to support meson build from the beginning. I can move it forward,
> although I recommend to use it after all the patches.
Thank you for the consideration. I think this approach helps a lot, since we are systematically adding code to ICE_PMD.

> 
> >
> > >
> > > >
> > > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > > >
> > > >  [PATCH v2 14/20] net/ice: support statistics
> > > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> > >eth.tx_multicast
> > > +
> > > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > > >
> > > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > > Should we add VSI VLAN here?
> > > > > Don't need. They're different functions. We add crc length here
> > > > > because of HW counts the packet length before crc is added.
> > > > So you are not fetching stats from HW registers from switch is
> > > > this
> > correct?
> > > > How will you get stats for actually transmitted in xstats? As I
> > > > understand xstats is for switch HW stats right?
> > > No. The stats is got from HW. We just correct it as HW doesn’t have
> > > the chance to add CRC length. But I don't understand why switch is
> > mentioned here.
> > Will ice pmd  for CVL expose all the switch stats as 'xstats'? If yes,
> > can you share in patch comments what are switch level stats?
> No. xstats means the extend stats which is not covered by the basic stats. It can be
> anything which is not in basic stats.
Thanks for confirming. So do xstats cover the ICE switch stats that are not covered by the PMD's basic stats? If yes, can you list which stats under xstats are fetched specifically from the ICE switch?

> 
> >
> > >
> > > >
> > > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > > >  > > Does this mean per queue offload capability is not supported?
> > > > If
> > > > > > yes, can you mention this in release notes under 'support or
> > limitation'
> > > > > No, it's not supported. We have a document, ice.ini, to list all
> > > > > the features supported. All the others are not supported.
> > > > > BTW, I don't think anything not supported is limitation.
> > > > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > > > offload for RX and TX. But you are stating in ice.ini you are
> > > > listing offload supports. So let me rephrase the question 'if you
> > > > support port level offload capability, it will reflect for all queues rx and tx.
> > > > But if you reflect queue level offload as 0 for rx and tx, then
> > > > APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue
> > > > offload enabled should fail. Is this correct understanding?'
> > > Sorry, I don’t know what's ICE_DSI_PMD.
> > ICE_PMD
> No exactly. We follow the rule of queue and port offload setting. If the feature is
> enabled in port level, we can ignore the queue setting. If the feature is not
> enabled in port level, we still can enable it per queue.
Thanks for confirming. So what will the result be for an unconfigured ICE port when the port and queue offloads are queried with rte_eth_dev_info_get?

> 
> >
> >
> > >
> > > >
> > > > > > If device switch is not configured (default value from NVM)
> > > > > > should we highlight the switch can support speed 10, 100,
> > > > > > 1000,
> > > > > > 1000 and son
> > > > on?
> > > > > No, this's the capability getting from HW.
> > > > If HW is supported or configured for 10, 100, 25G then those
> > > > should be returned correctly this I agree. But when the device is
> > > > queried for capability it should highlight all supported speeds of switch. Am I
> right?
> > > No. Here shows the result not all the speeds supported. Like the
> > > speed after auto negotiation.
> > As per your current statement "If user uses API rte_eth_dev_info_get
> > to get speed_capa, current speed will be returned as auto negotiated
> > value and not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> > ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I will
> > leave this to others to comment since in my humble opinion this is not
> > expected.
> OK. I'll change it to the bitmap.
Thanks, appreciate the understanding

> 
> >
> > >
> > > >
> > > > > > If speed is not true as stated above, can you please add this
> > > > > > to release notes and documentation.
> > > > > Here listed all the case we can get from HW.
> > > > Please add to ice_dsi documentation also.
> > > Sorry, no idea about ice_dsi.
> > My mistake ICE_PMD is what I am referring to.
> >
> > >
> > > >
> > > > snipped


* Re: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
  2018-12-14  2:43         ` Zhang, Qi Z
@ 2018-12-14  8:09           ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  8:09 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Friday, December 14, 2018 10:43 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> 
> 
> 
> > -----Original Message-----
> > From: Lu, Wenzhuo
> > Sent: Friday, December 14, 2018 8:36 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> > <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> >
> > Hi Qi,
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Thursday, December 13, 2018 4:47 PM
> > > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > > Jingjing <jingjing.wu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v3 20/34] net/ice: support link
> > > update
> > >
> > > Hi Wenzhuo:
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> > > > Sent: Wednesday, December 12, 2018 3:00 PM
> > > > To: dev@dpdk.org
> > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > > > <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>
> > > > Subject: [dpdk-dev] [PATCH v3 20/34] net/ice: support link update
> > > >
> > > > Add ops link_update.
> > >
> > >
> > > > +ice_interrupt_handler(void *param) {
> > > > +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > >dev_private);
> > > > +	uint32_t oicr;
> > >
> > > I saw the patch also enabled interrupt handler, which looks like be
> > > independent and not related with the commit log.
> > > It's better to separate it.
> > Here we only enable LSC interrupt for link update. So, I prefer
> > putting it in this patch.
> 
> OK, I will suggest to modify the commit log to include the LSC part, since
> enable LSC is different with add ops link_update.
Will add it in the log.

* Re: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
  2018-12-14  3:26           ` Varghese, Vipin
@ 2018-12-14  8:20             ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-14  8:20 UTC (permalink / raw)
  To: Varghese, Vipin, dev

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Friday, December 14, 2018 11:26 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi Wenzhuo,
> 
> snipped
> > >
> > > Thanks for the update, couple of suggestion in my opinion shared
> > > below
> > >
> > > Snipped
> > > > > [PATCH v2 02/20] net/ice: support device initialization
> > > > > > +# Compile burst-oriented ICE PMD driver #
> > > > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > > > >
> > > > > Based on ' https://patches.dpdk.org/patch/48488/' it is
> > > > > suggested to configure. But here is it already set to 'y'. Is this correct?
> > > > > If yes can you update ' https://patches.dpdk.org/patch/48488/'
> > > > We've discussed the setting. I don’t know anything left. If
> > > > there's, please let me know.
> > > I think I poorly communicated, so let me try again. In document it
> > > is stated to configure as 'y'. But in your default config is 'y'. So
> > > either make the default as 'n' so user can configure or reword in
> > > document as 'Ensure the config is set to y for before building'
> > In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default
> > ``y``) ". No conflict with the configure file.
> I think I am failing you to convey even after multiple attempts. Hence I
> humbly leave this to other from mailing list to get it sorted.
I also feel confused. The compile config is set to 'y' and the document says the default value is 'y'. I just don't see the problem.

> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > > > Should not meson build option be add start. That is in
> > > > > > > > > patch
> > > > > > > > > 1/20 so compile options does not fail?
> > > > > > > > It will not fail. Enabling the compile earlier only means
> > > > > > > > the code can be
> > > > > > > compiled.
> > > > > > > > But, to use this device we do need the whole patch set.
> > > > > > > > From this point of view, compiling it at the end maybe better.
> > > > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > > > Sure, meson build will not be broken by any one of these patches.
> > > > > > Only until this patch, what built by meson can support ice.
> > > > > Thanks for confirmation that you have tried
> > > > > './devtools/test-meson- builds.sh' and the intermediate build
> > > > > for ICE_DSI
> > PMD does not fail.
> > > > I said meson build is working. But don't know what's ICE_DSI PMD.
> > >
> > > Once again apologies if I poorly communicated ICE_PMD, my question
> > > is 'if I take patch 1/20 in v2 can I build for librte_pmd_ice'? is
> > > not we are expecting each additional layering for functionality like
> > > mtu set, switch set should be compiling and not just the last build?
> > O, you want to support meson build from the beginning. I can move it
> > forward, although I recommend to use it after all the patches.
> Thank you for consideration, I think this approach helps a lot since we are
> systematically adding code to ICE_PMD
> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > > > >
> > > > >  [PATCH v2 14/20] net/ice: support statistics
> > > > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> > > >eth.tx_multicast
> > > > +
> > > > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > > > >
> > > > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > > > Should we add VSI VLAN here?
> > > > > > Don't need. They're different functions. We add crc length
> > > > > > here because of HW counts the packet length before crc is added.
> > > > > So you are not fetching stats from HW registers from switch is
> > > > > this
> > > correct?
> > > > > How will you get stats for actually transmitted in xstats? As I
> > > > > understand xstats is for switch HW stats right?
> > > > No. The stats is got from HW. We just correct it as HW doesn’t
> > > > have the chance to add CRC length. But I don't understand why
> > > > switch is
> > > mentioned here.
> > > Will ice pmd  for CVL expose all the switch stats as 'xstats'? If
> > > yes, can you share in patch comments what are switch level stats?
> > No. xstats means the extend stats which is not covered by the basic
> > stats. It can be anything which is not in basic stats.
> Thanks for confirming, hence is xstats covering for ICE Switch stats which is
> not covered in PMD stats? If yes, can you mention the list of stats covered
> under xstats that is fetched specifically from ICE switch?
Sorry, I cannot do that. The HW is somewhat of a black box to us; I cannot tell you something I'm not sure about.

> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > > > >  > > Does this mean per queue offload capability is not supported?
> > > > > If
> > > > > > > yes, can you mention this in release notes under 'support or
> > > limitation'
> > > > > > No, it's not supported. We have a document, ice.ini, to list
> > > > > > all the features supported. All the others are not supported.
> > > > > > BTW, I don't think anything not supported is limitation.
> > > > > If I understand correctly,  ICE_DSI_PMD is advertising it has
> > > > > not offload for RX and TX. But you are stating in ice.ini you
> > > > > are listing offload supports. So let me rephrase the question
> > > > > 'if you support port level offload capability, it will reflect for all
> queues rx and tx.
> > > > > But if you reflect queue level offload as 0 for rx and tx, then
> > > > > APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue
> > > > > offload enabled should fail. Is this correct understanding?'
> > > > Sorry, I don’t know what's ICE_DSI_PMD.
> > > ICE_PMD
> > No exactly. We follow the rule of queue and port offload setting. If
> > the feature is enabled in port level, we can ignore the queue setting.
> > If the feature is not enabled in port level, we still can enable it per queue.
> Thanks for confirming, so what will be result for non-configured ICE port for
> offload for port and queue using rte_eth_dev_get_info?
rte_eth_dev_info_get also shows the default values; the queue offload capabilities are reported as 0 by default.
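
For illustration, what an application would observe on such a port (a
sketch):

        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        /* info.rx_queue_offload_capa and info.tx_queue_offload_capa are
         * 0 here, i.e. no queue-only offloads are advertised, while the
         * port-level rx/tx_offload_capa fields still list what the
         * device supports.
         */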

> 
> >
> > >
> > >
> > > >
> > > > >
> > > > > > > If device switch is not configured (default value from NVM)
> > > > > > > should we highlight the switch can support speed 10, 100,
> > > > > > > 1000,
> > > > > > > 1000 and son
> > > > > on?
> > > > > > No, this's the capability getting from HW.
> > > > > If HW is supported or configured for 10, 100, 25G then those
> > > > > should be returned correctly this I agree. But when the device
> > > > > is queried for capability it should highlight all supported
> > > > > speeds of switch. Am I
> > right?
> > > > No. Here shows the result not all the speeds supported. Like the
> > > > speed after auto negotiation.
> > > As per your current statement "If user uses API rte_eth_dev_info_get
> > > to get speed_capa, current speed will be returned as auto negotiated
> > > value and not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> > > ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I
> will
> > > leave this to others to comment since in my humble opinion this is
> > > not expected.
> > OK. I'll change it to the bitmap.
> Thanks, appreciate the understanding
> 
> >
> > >
> > > >
> > > > >
> > > > > > > If speed is not true as stated above, can you please add
> > > > > > > this to release notes and documentation.
> > > > > > Here listed all the case we can get from HW.
> > > > > Please add to ice_dsi documentation also.
> > > > Sorry, no idea about ice_dsi.
> > > My mistake ICE_PMD is what I am referring to.
> > >
> > > >
> > > > >
> > > > > snipped


* [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (21 preceding siblings ...)
  2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
@ 2018-12-14  8:34 ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 01/32] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
                     ` (31 more replies)
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
  24 siblings, 32 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds support for a new net PMD for the
Intel® Ethernet Network Adapter E810, also
called ice.

The following features are enabled by this patch set:

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics & xstats.

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

---
v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed support for the secondary process.
 - Split the base code into more patches.
 - Pass NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.

v4:
 - Moved meson build forward.
 - Updated and split the documentation across the related patches.
 - Updated the device info.
 - Removed unnecessary compile config.
 - Removed the code of ops rx_descriptor_done.
 - Adjusted the order of the functions.
 - Added error print for MAC setting.

Paul M Stillwell Jr (14):
  net/ice/base: add registers for Intel(R) E800 Series NIC
  net/ice/base: add basic structures
  net/ice/base: add admin queue structures and commands
  net/ice/base: add sideband queue info
  net/ice/base: add device IDs for Intel(r) E800 Series NICs
  net/ice/base: add control queue information
  net/ice/base: add data center bridging (DCB)
  net/ice/base: add basic transmit scheduler
  net/ice/base: add virtual switch code
  net/ice/base: add code to work with the NVM
  net/ice/base: add common functions
  net/ice/base: add various headers
  net/ice/base: add protocol structures and defines
  net/ice/base: add structures for RX/TX queues

Wenzhuo Lu (18):
  net/ice/base: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops

 MAINTAINERS                              |    8 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   38 +
 doc/guides/nics/ice.rst                  |  104 +
 doc/guides/nics/index.rst                |    1 +
 doc/guides/rel_notes/release_19_02.rst   |    5 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   55 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1891 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_common.c        | 3521 +++++++++++
 drivers/net/ice/base/ice_common.h        |  186 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_dcb.c           | 1385 +++++
 drivers/net/ice/base/ice_dcb.h           |  220 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2291 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 5380 ++++++++++++++++
 drivers/net/ice/base/ice_sched.h         |  210 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2812 +++++++++
 drivers/net/ice/base/ice_switch.h        |  333 +
 drivers/net/ice/base/ice_type.h          |  869 +++
 drivers/net/ice/base/meson.build         |   27 +
 drivers/net/ice/ice_ethdev.c             | 3242 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_lan_rxtx.c           | 2872 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  154 +
 drivers/net/ice/meson.build              |   13 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 42 files changed, 38391 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_dcb.c
 create mode 100644 drivers/net/ice/base/ice_dcb.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3

* [dpdk-dev] [PATCH v4 01/32] net/ice/base: add registers for Intel(R) E800 Series NIC
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 02/32] net/ice/base: add basic structures Wenzhuo Lu
                     ` (30 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the register definitions for the Intel(R) E800
Series NIC. There is no functionality in this patch.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 MAINTAINERS                           |    6 +
 drivers/net/ice/base/ice_hw_autogen.h | 9815 +++++++++++++++++++++++++++++++++
 2 files changed, 9821 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba312..37f3bf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
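+/*
+ * Usage sketch, not part of the generated map: every field FOO in this file
+ * is described by a FOO_S bit shift and a pre-shifted FOO_M mask, so fields
+ * are encoded as ((val << FOO_S) & FOO_M) and decoded as
+ * ((reg & FOO_M) >> FOO_S). Assuming wr32() is the driver's 32-bit MMIO
+ * write helper, re-arming interrupt vector 'vec' with ITR index 'itr' could
+ * look like:
+ *
+ *	u32 val = PF0INT_DYN_CTL_INTENA_M | PF0INT_DYN_CTL_CLEARPBA_M |
+ *		  ((itr << PF0INT_DYN_CTL_ITR_INDX_S) &
+ *		   PF0INT_DYN_CTL_ITR_INDX_M);
+ *	wr32(hw, PF0INT_DYN_CTL(vec), val);
+ */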
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
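+/*
+ * Usage sketch: the page-mapped queue doorbells above take the queue index
+ * as the macro argument and expect the new tail in the low bits. Assuming
+ * wr32() as the MMIO write helper, advancing RX queue 'q' to 'tail' might
+ * be:
+ *
+ *	wr32(hw, QRX_TAIL_PAGE(q), tail & QRX_TAIL_PAGE_TAIL_M);
+ */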
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
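+/*
+ * Usage sketch: the VSI_MBX_* registers above are replicated per VSI; the
+ * macro argument selects the instance and *_MAX_INDEX is the last valid
+ * one. Assuming rd32() as the MMIO read helper, reading the mailbox receive
+ * head of VSI 'v' might be:
+ *
+ *	if (v <= VSI_MBX_ARQH_MAX_INDEX)
+ *		head = (rd32(hw, VSI_MBX_ARQH(v)) &
+ *			VSI_MBX_ARQH_ARQH_M) >> VSI_MBX_ARQH_ARQH_S;
+ */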
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
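+/*
+ * Usage sketch; the flow below is inferred purely from the field names, so
+ * treat it as an assumption: an ACL table access appears to be issued by
+ * programming GL_ACL_ACCESS_CMD with the table id, entry index and
+ * operation, setting EXECUTE, then polling GL_ACL_ACCESS_STATUS until BUSY
+ * clears and checking DONE/ERROR (a real poll would be bounded by a
+ * timeout):
+ *
+ *	wr32(hw, GL_ACL_ACCESS_CMD,
+ *	     ((tbl << GL_ACL_ACCESS_CMD_TABLE_ID_S) &
+ *	      GL_ACL_ACCESS_CMD_TABLE_ID_M) |
+ *	     ((idx << GL_ACL_ACCESS_CMD_ENTRY_INDEX_S) &
+ *	      GL_ACL_ACCESS_CMD_ENTRY_INDEX_M) |
+ *	     GL_ACL_ACCESS_CMD_EXECUTE_M);
+ *	while (rd32(hw, GL_ACL_ACCESS_STATUS) & GL_ACL_ACCESS_STATUS_BUSY_M)
+ *		;
+ */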
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
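+/*
+ * Usage sketch; the flow is inferred from the register names, so treat it
+ * as an assumption: GLCOMM_QTX_CNTX_CTL/DATA/STAT look like an indirect
+ * window onto TX queue context: load GLCOMM_QTX_CNTX_DATA[0..9], kick the
+ * command, then poll until it completes (bounded by a timeout in real
+ * code):
+ *
+ *	wr32(hw, GLCOMM_QTX_CNTX_CTL,
+ *	     ((qid << GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S) &
+ *	      GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M) |
+ *	     ((cmd << GLCOMM_QTX_CNTX_CTL_CMD_S) &
+ *	      GLCOMM_QTX_CNTX_CTL_CMD_M) |
+ *	     GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M);
+ *	while (rd32(hw, GLCOMM_QTX_CNTX_STAT) &
+ *	       GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M)
+ *		;
+ */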
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
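+/*
+ * Usage sketch: QTX_COMM_HEAD appears to report how far the hardware has
+ * consumed a TX ring, which a cleanup routine can compare against the tail
+ * it last wrote. Assuming rd32() as the MMIO read helper:
+ *
+ *	head = (rd32(hw, QTX_COMM_HEAD(q)) &
+ *		QTX_COMM_HEAD_HEAD_M) >> QTX_COMM_HEAD_HEAD_S;
+ */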
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
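+/*
+ * Usage sketch of typical control-queue bring-up, not the driver's literal
+ * code: the PF firmware admin queues above are configured by writing the
+ * DMA ring base into ATQBAL/ATQBAH (base assumed 64-byte aligned, per the
+ * ATQBAL_LSB field) and the descriptor count plus the enable bit into
+ * ATQLEN:
+ *
+ *	wr32(hw, PF_FW_ATQBAL, (u32)ring_dma);
+ *	wr32(hw, PF_FW_ATQBAH, (u32)(ring_dma >> 32));
+ *	wr32(hw, PF_FW_ATQLEN, (num_desc & PF_FW_ATQLEN_ATQLEN_M) |
+ *				PF_FW_ATQLEN_ATQENABLE_M);
+ */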
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
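+/* Editorial note, not generated content: each control queue instance in
+ * the blocks above exposes the same five registers. BAH/BAL hold the
+ * 64-bit descriptor ring base (the BAL field starts at bit 6, so the
+ * ring must be 64-byte aligned), H and T are the head and tail indices,
+ * and LEN carries the ring length in bits 0..9 plus the VFE, OVFL, CRIT
+ * and ENABLE status bits in bits 28..31.
+ */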
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
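+/* Illustrative sketch, not part of the autogenerated list: drivers
+ * consume these _S/_M pairs by masking and shifting. Assuming
+ * MAKEMASK(m, s) expands to ((m) << (s)) and rd32() is the register
+ * read helper from ice_osdep.h, the PF sideband receive queue state
+ * could be read as:
+ *
+ *	u32 val = rd32(hw, PF_SB_ARQLEN);
+ *	u16 len = (u16)((val & PF_SB_ARQLEN_ARQLEN_M) >>
+ *			PF_SB_ARQLEN_ARQLEN_S);
+ *	bool enabled = !!(val & PF_SB_ARQLEN_ARQENABLE_M);
+ */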
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
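+/* Illustrative sketch, not part of the autogenerated list: the
+ * parameterized registers above are 4-byte-strided arrays with one
+ * entry per VF (or VSI), valid up to the matching _MAX_INDEX constant.
+ * A hypothetical bounds-checked access could look like:
+ *
+ *	if (vf_id <= VF_MBX_ARQLEN_MAX_INDEX)
+ *		val = rd32(hw, VF_MBX_ARQLEN(vf_id));
+ */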
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
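+/* Illustrative sketch, not part of the autogenerated list: PRTDCB_TUP2TC
+ * packs eight 3-bit UP-to-TC fields at shifts 0, 3, ..., 21. Assuming
+ * wr32() is the register write helper from ice_osdep.h, and with
+ * tc_of_up[] as a hypothetical per-UP mapping array, a full mapping
+ * could be composed as:
+ *
+ *	u32 up2tc = 0;
+ *	int up;
+ *
+ *	for (up = 0; up < 8; up++)
+ *		up2tc |= ((u32)tc_of_up[up] & 0x7) << (3 * up);
+ *	wr32(hw, PRTDCB_TUP2TC, up2tc);
+ */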
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
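+/* Field access sketch (illustrative only, not autogenerated): every
+ * register field in this file is described by a shift (_S) and a mask
+ * (_M) built with MAKEMASK(value, shift), i.e. mask = value << shift.
+ * A field is read by masking, then shifting down; for example, the
+ * firmware state byte of GL_FWSTS, assuming the rd32() register read
+ * helper from ice_osdep.h:
+ *
+ *	u32 fwsts = rd32(hw, GL_FWSTS);
+ *	u8 fws0b = (u8)((fwsts & GL_FWSTS_FWS0B_M) >> GL_FWSTS_FWS0B_S);
+ */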
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
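+/*
+ * Field access convention: every register above is described by its MMIO
+ * offset plus per-field _S (bit shift) and _M (pre-shifted bit mask)
+ * macros. A minimal illustrative sketch of reading and writing a field,
+ * assuming the rd32()/wr32() accessors provided by ice_osdep.h (the
+ * variables hw, vf_id and reset_type below are hypothetical, shown only
+ * to demonstrate the mask-then-shift pattern):
+ *
+ *	u32 reg = rd32(hw, GLGEN_RSTAT);
+ *	u8 reset_type = (u8)((reg & GLGEN_RSTAT_RESET_TYPE_M) >>
+ *			     GLGEN_RSTAT_RESET_TYPE_S);
+ *
+ *	wr32(hw, VPGEN_VFRTRIG(vf_id), VPGEN_VFRTRIG_VFSWR_M);
+ */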
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
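+/*
+ * Illustrative note, not part of the autogenerated register map: every
+ * field above is described by a shift (_S) and mask (_M) pair built from
+ * BIT()/MAKEMASK().  A minimal sketch of composing a GLINT_DYN_CTL write
+ * value, assuming a hypothetical caller-supplied u16 itr_idx:
+ *
+ *	u32 val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+ *		  (((u32)itr_idx << GLINT_DYN_CTL_ITR_INDX_S) &
+ *		   GLINT_DYN_CTL_ITR_INDX_M);
+ */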
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
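+/*
+ * Illustrative note, not part of the autogenerated register map: the same
+ * _S/_M pairs decode values read back from hardware.  A minimal sketch for
+ * GLINT_VECT2FUNC, assuming a hypothetical 32-bit register read 'val':
+ *
+ *	u32 vf_num = (val & GLINT_VECT2FUNC_VF_NUM_M) >>
+ *		     GLINT_VECT2FUNC_VF_NUM_S;
+ *	bool is_pf = !!(val & GLINT_VECT2FUNC_IS_PF_M);
+ */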
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
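+/*
+ * Illustrative sketch, not part of the autogenerated register map: each
+ * register above follows the _S/_M convention, where FIELD_S is the bit
+ * offset of a field and FIELD_M is its already-shifted mask built from
+ * BIT() or MAKEMASK(value, shift). Assuming the ICE_READ_REG() and
+ * ICE_WRITE_REG() helpers from ice_osdep.h, a field is read and updated
+ * with a plain read-modify-write:
+ *
+ *	u32 reg = ICE_READ_REG(hw, QRX_CTRL(rxq_id));
+ *	bool enabled = !!(reg & QRX_CTRL_QENA_STAT_M);
+ *	reg |= QRX_CTRL_QENA_REQ_M;
+ *	ICE_WRITE_REG(hw, QRX_CTRL(rxq_id), reg);
+ *
+ * while the RX tail doorbell is a direct write of the new tail index:
+ *
+ *	ICE_WRITE_REG(hw, QRX_TAIL(rxq_id), rx_id & QRX_TAIL_TAIL_M);
+ */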
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
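+/*
+ * Illustrative sketch only: VSILAN_QTABLE() packs two 11-bit queue
+ * indexes per 32-bit entry, so a single entry resolves a pair of
+ * VSI-relative queues. Assuming the ICE_READ_REG() helper, the pair is
+ * unpacked with the _S/_M pairs above:
+ *
+ *	u32 qt = ICE_READ_REG(hw, VSILAN_QTABLE(i, vsi_num));
+ *	u16 q0 = (qt & VSILAN_QTABLE_QINDEX_0_M) >> VSILAN_QTABLE_QINDEX_0_S;
+ *	u16 q1 = (qt & VSILAN_QTABLE_QINDEX_1_M) >> VSILAN_QTABLE_QINDEX_1_S;
+ */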
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
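+/*
+ * Illustrative sketch only: the GL_MDET, PF_MDET and VP_MDET registers
+ * above report malicious-driver-detection events as packed fields gated
+ * by a VALID bit. Assuming ICE_READ_REG() and ICE_WRITE_REG(), an event
+ * is decoded and then acknowledged by writing the register back:
+ *
+ *	u32 mdet = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+ *
+ *	if (mdet & GL_MDET_TX_PQM_VALID_M) {
+ *		u8 pf = (mdet & GL_MDET_TX_PQM_PF_NUM_M) >>
+ *			GL_MDET_TX_PQM_PF_NUM_S;
+ *		u16 queue = (mdet & GL_MDET_TX_PQM_QNUM_M) >>
+ *			GL_MDET_TX_PQM_QNUM_S;
+ *		ICE_WRITE_REG(hw, GL_MDET_TX_PQM, 0xFFFFFFFF);
+ *	}
+ */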
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
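+/*
+ * Illustrative sketch, assuming the rd32() accessor from ice_osdep.h: each
+ * field above is described by a shift (_S) and a pre-shifted mask (_M)
+ * built with MAKEMASK(width, shift), so a field is read by masking and
+ * shifting down. For example, the PCI bus number in PF_FUNC_RID:
+ *
+ *	u32 rid = rd32(hw, PF_FUNC_RID);
+ *	u8 bus = (u8)((rid & PF_FUNC_RID_BUS_NUMBER_M) >>
+ *		      PF_FUNC_RID_BUS_NUMBER_S);
+ */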
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
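+/*
+ * Illustrative sketch, assuming rd32() and struct ice_hw from the
+ * surrounding base code: the GLPES_* statistics below come in _LO/_HI
+ * pairs (a 32-bit low word at the base offset, a 16-bit high word 4 bytes
+ * above it) that together form a 48-bit counter. One hedged way to combine
+ * a pair, re-reading to guard against a low-word rollover between reads:
+ *
+ *	static inline u64
+ *	ice_read_stat48(struct ice_hw *hw, u32 lo_reg, u32 hi_reg)
+ *	{
+ *		u32 hi = rd32(hw, hi_reg);
+ *		u32 lo = rd32(hw, lo_reg);
+ *
+ *		if (rd32(hw, hi_reg) != hi) { /* low word wrapped */
+ *			hi = rd32(hw, hi_reg);
+ *			lo = rd32(hw, lo_reg);
+ *		}
+ *		return ((u64)hi << 32) | lo;
+ *	}
+ */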
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
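+/*
+ * Illustrative sketch, assuming rd32(): parameterized registers such as
+ * GLPES_PFTCPRTXSEG(_i) above expand to base + stride * _i and are only
+ * defined up to their *_MAX_INDEX companion, so a bounds check belongs in
+ * front of every indexed access:
+ *
+ *	if (pf_id <= GLPES_PFTCPRTXSEG_MAX_INDEX)
+ *		rtx_segs = rd32(hw, GLPES_PFTCPRTXSEG(pf_id));
+ */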
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
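+/*
+ * Note on the conventions in this file: a register REG with a field FIELD
+ * is described by REG_FIELD_S (the shift of the field's least significant
+ * bit) and REG_FIELD_M (the mask, already shifted into place via
+ * MAKEMASK(bits, shift) or BIT(shift)); indexed registers REG(_i) compute
+ * per-instance addresses from a base plus a stride, with _i bounded by
+ * REG_MAX_INDEX.  A minimal field-read sketch, assuming the rd32() MMIO
+ * read helper that ice_osdep.h is expected to provide:
+ *
+ *	u32 stat = rd32(hw, PRTPM_EEE_STAT);
+ *	u8 rx_lpi = (stat & PRTPM_EEE_STAT_RX_LPI_STATUS_M) >>
+ *		    PRTPM_EEE_STAT_RX_LPI_STATUS_S;
+ */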
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
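+/*
+ * GLQF_FD_SIZE appears to split the global Flow Director filter space
+ * into a guaranteed pool (FD_GSIZE) and a shared best-effort pool
+ * (FD_BSIZE).  A hedged decode sketch, rd32() assumed as above:
+ *
+ *	u32 v = rd32(hw, GLQF_FD_SIZE);
+ *	u16 guaranteed = (v & GLQF_FD_SIZE_FD_GSIZE_M) >>
+ *			 GLQF_FD_SIZE_FD_GSIZE_S;
+ *	u16 best_effort = (v & GLQF_FD_SIZE_FD_BSIZE_M) >>
+ *			  GLQF_FD_SIZE_FD_BSIZE_S;
+ */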
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
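+/*
+ * GLQF_FDINSET packs four field-vector word selections into one register:
+ * each byte carries a 5-bit FV word index (INDXn) with a validity flag in
+ * its top bit (VALn).  An illustrative packing sketch for slot 0 of one
+ * (i, j) instance, where "fv_idx" is a hypothetical index and wr32() is
+ * assumed from ice_osdep.h; setting VAL0 marks the slot valid:
+ *
+ *	u32 v = ((u32)fv_idx << GLQF_FDINSET_FV_WORD_INDX0_S) &
+ *		GLQF_FDINSET_FV_WORD_INDX0_M;
+ *	v |= GLQF_FDINSET_FV_WORD_VAL0_M;
+ *	wr32(hw, GLQF_FDINSET(i, j), v);
+ */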
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
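+/*
+ * GLQF_HKEY exposes the global RSS hash key as 13 registers of four key
+ * bytes each (52 bytes in total, one byte per KEY_0..KEY_3 lane).  A
+ * hedged load sketch, where "key" is a hypothetical 52-byte array and
+ * wr32() is assumed:
+ *
+ *	for (i = 0; i <= GLQF_HKEY_MAX_INDEX; i++)
+ *		wr32(hw, GLQF_HKEY(i),
+ *		     (u32)key[4 * i] |
+ *		     ((u32)key[4 * i + 1] << GLQF_HKEY_KEY_1_S) |
+ *		     ((u32)key[4 * i + 2] << GLQF_HKEY_KEY_2_S) |
+ *		     ((u32)key[4 * i + 3] << GLQF_HKEY_KEY_3_S));
+ */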
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
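+/*
+ * PFQF_HLUT is the per-PF RSS redirection table: 512 registers, each
+ * packing four 8-bit lookup entries (LUT0..LUT3), i.e. up to 2048
+ * entries.  A hedged fill sketch, with "lut" a hypothetical entry array
+ * and wr32() assumed:
+ *
+ *	for (i = 0; i <= PFQF_HLUT_MAX_INDEX; i++)
+ *		wr32(hw, PFQF_HLUT(i),
+ *		     (u32)lut[4 * i] |
+ *		     ((u32)lut[4 * i + 1] << PFQF_HLUT_LUT1_S) |
+ *		     ((u32)lut[4 * i + 2] << PFQF_HLUT_LUT2_S) |
+ *		     ((u32)lut[4 * i + 3] << PFQF_HLUT_LUT3_S));
+ */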
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
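+/*
+ * The GLPRT_*L/GLPRT_*H pairs in this block form 40-bit port statistics
+ * counters: the L register holds the low 32 bits and the H register the
+ * high 8 bits (hence the 0xFFFFFFFF vs. 0xFF masks).  A hedged wrap-safe
+ * read sketch, rd32() assumed, with "loreg"/"hireg" and "*prev"/"*count"
+ * as hypothetical driver state:
+ *
+ *	u64 now = ((u64)(rd32(hw, hireg) & 0xFF) << 32) |
+ *		  rd32(hw, loreg);
+ *	*count += (now - *prev) & ((1ULL << 40) - 1);
+ *	*prev = now;
+ */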
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
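+/* GLV_*: per-VSI (_i = VSI index, 0..767) receive/transmit statistics.
+ * Each byte/packet counter is split into a 32-bit low half (...CL) and an
+ * 8-bit high half (...CH) forming a 40-bit rolling count. A minimal read
+ * sketch, assuming an rd32(hw, reg) register-read helper from the OS
+ * dependence layer:
+ *
+ *	u64 gorc = ((u64)(rd32(hw, GLV_GORCH(vsi)) & GLV_GORCH_GORCH_M)
+ *		    << 32) | rd32(hw, GLV_GORCL(vsi));
+ */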
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
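+/* GLVEBUP_*: VEB byte/packet counters indexed two-dimensionally, most
+ * likely _i = user priority (0..7) and _j = VEB index (0..31), with the
+ * same 32-bit low / 8-bit high split as the GLV counters above.
+ */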
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
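+/* PRTRPB/PRTTPB/TPB: per-port receive and transmit packet-buffer
+ * statistics (dropped packets and per-TC packets/bytes sent).
+ */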
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
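+/* EMP_SWT, GL_SWT and PRT_SWT: switch configuration and control,
+ * covering indirect pruning/replication list access, mirroring rules,
+ * broadcast/multicast storm-control counters and thresholds, and
+ * TC-to-UP mapping (PRT_TCTUPR).
+ */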
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
+#define EMP_SWT_REPIND				0x0020401c /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040a4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040ac /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041c0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
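+/* GLHH_ART: cross-timestamp support pairing the platform's Always
+ * Running Timer (ART) with the device clock for host/device time
+ * synchronization.
+ */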
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
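+/* GLTSYN_*: IEEE 1588/PTP timesync engine, duplicated per timer
+ * (_i = 0..1): auxiliary input/output pins, clock output generation,
+ * increment value, shadow time/adjustment registers, target times and
+ * event status.
+ */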
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
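+/* PFHH_SEM and PFTSYN_SEM are hardware semaphores guarding the
+ * cross-timestamp and timesync logic; BUSY is the lock bit and PF_OWNER
+ * records the holder. Fields follow the usual _M/_S pattern, e.g. (a
+ * sketch, assuming an rd32(hw, reg) read helper):
+ *
+ *	u8 owner = (rd32(hw, PFTSYN_SEM) & PFTSYN_SEM_PF_OWNER_M) >>
+ *		   PFTSYN_SEM_PF_OWNER_S;
+ */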
+#define GLPE_TSCD_FLR(_i)			(0x0051E24c + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
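+/* PF_VIRT_VSTATUS reports SR-IOV state (active VFs vs. total VFs and an
+ * IOV_ACTIVE flag); the PF_VT_PFALLOC registers describe the contiguous
+ * FIRSTVF..LASTVF range assigned to this PF, qualified by a VALID bit.
+ */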
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
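+/* VSIQF_*: per-VSI queue filtering: Flow Director counters and control
+ * (FD_CNT, FD_CTL1, FD_DFLT, FD_SIZE), RSS hash control, the 13-dword
+ * (52-byte) hash key (HKEY) and the 16-dword hash lookup table (HLUT,
+ * 64 four-bit entries).
+ */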
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
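+/* GLPM, PFPM and PRTPM: power-management wake-up support: wake-up filter
+ * control (PFPM_WUFC) and status (PFPM_WUS) bits for link change, magic
+ * packet, management events and the eight flexible filters, plus the
+ * station MAC address used for wake-up matching (PRTPM_SAH/SAL).
+ */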
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
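+/* VF_MBX_*: the VF's view of the PF<->VF mailbox, modelled as an admin
+ * receive queue (ARQ) and admin transmit queue (ATQ): base address
+ * high/low, head, tail, and a LEN register carrying the ring length plus
+ * VFE/OVFL/CRIT error flags and an ENABLE bit.
+ */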
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
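+/* VFINT_DYN_CTL0 and VFINT_DYN_CTLN: dynamic interrupt control for the
+ * VF's miscellaneous vector and its 64 data vectors. A sketch of the
+ * common enable sequence, assuming a wr32(hw, reg, val) write helper:
+ *
+ *	wr32(hw, VFINT_DYN_CTLN(vec),
+ *	     VFINT_DYN_CTLN_INTENA_M | VFINT_DYN_CTLN_CLEARPBA_M |
+ *	     (itr_idx << VFINT_DYN_CTLN_ITR_INDX_S));
+ */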
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
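+/* QRX_TAIL1 and QTX_TAIL: per-queue RX tail and TX doorbell registers
+ * through which software advances the ring tail after posting
+ * descriptors.
+ */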
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
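+/* VFPE_*: the VF's protocol engine (RDMA) interface: CQP setup and
+ * status (CCQPHIGH/CCQPLOW/CCQPSTATUS), CQP doorbell and tail, CQ
+ * arm/ack, AEQ and WQE allocation doorbells, and TCP timer state.
+ */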
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+
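+/* Illustrative usage sketch (editor's note, not part of the register map):
+ * a register field is read by masking and shifting with the _M/_S pairs
+ * above. Assuming rd32() is the register read helper from ice_osdep.h:
+ *
+ *	u32 val = rd32(hw, QRX_TAIL1(5));
+ *	u16 tail = (val & QRX_TAIL1_TAIL_M) >> QRX_TAIL1_TAIL_S;
+ */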
+#endif
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 02/32] net/ice/base: add basic structures
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 01/32] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 03/32] net/ice/base: add admin queue structures and commands Wenzhuo Lu
                     ` (29 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures required by the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_type.h | 869 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 869 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_type.h

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..256bf3f
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,869 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+#define BIT_ULL(a) (1ULL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
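+
+/* Illustrative example: with bitmap = 0x5 (TC0 and TC2 enabled),
+ * ice_is_tc_ena(0x5, 0) and ice_is_tc_ena(0x5, 2) are true while
+ * ice_is_tc_ena(0x5, 1) is false.
+ */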
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) ((n) / (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
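+
+/* Illustrative example: round_up_64bit() divides rounding half up, so
+ * round_up_64bit(10, 4) == (10 + 2) / 4 == 3.
+ */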
+
+static inline u32 ice_round_to_num(u32 N, u32 R)
+{
+	return ((((N) % (R)) < ((R) / 2)) ? (((N) / (R)) * (R)) :
+		((((N) + (R) - 1) / (R)) * (R)));
+}
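+
+/* Illustrative example: ice_round_to_num() rounds N to the nearest
+ * multiple of R, e.g. ice_round_to_num(23, 10) == 20 (remainder 3 < 5)
+ * and ice_round_to_num(26, 10) == 30 (remainder 6 >= 5).
+ */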
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Convert from ms to the 1 usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
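+/* Illustrative example: ICE_HI_DWORD(0x1122334455667788ULL) == 0x11223344
+ * and ICE_LO_DWORD(0x1122334455667788ULL) == 0x55667788.
+ */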
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
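+
+/* Illustrative usage (per the comment above): to trace initialization plus
+ * all admin queue traffic, a driver could set
+ *
+ *	hw->debug_mask = ICE_DBG_INIT | ICE_DBG_AQ;
+ */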
+
+
+
+
+
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+#ifdef ADQ_SUPPORT
+	ICE_VSI_CHNL = 4,
+#endif /* ADQ_SUPPORT */
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bits definition */
+	u64 phy_type_low;
+	u64 phy_type_high;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to the module_type[ICE_MODULE_TYPE_TOTAL_BYTE] defines in
+	 * the ice_aqc_get_phy_caps structure
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	u64 phy_type_high;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* device is embedded rather than a card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the values of the RESET_TYPE field of the GLGEN_RSTAT
+ * register. ICE_RESET_PFR does not match any RESET_TYPE value in the
+ * GLGEN_RSTAT register because its reset source is different from the other
+ * types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port-to-queue branches w.r.t. topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM/VSI nodes connect to the
+ * driver-defined policy for the default aggregator
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
+
+struct ice_sched_rl_profile {
+	u32 rate; /* In Kbps */
+	struct ice_aqc_rl_profile_elem info;
+};
+
+/* The aggregator type determines if the identifier is for a VSI group,
+ * aggregator group, aggregator of queues, or queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+/* Rate limit types */
+enum ice_rl_type {
+	ICE_UNKNOWN_BW = 0,
+	ICE_MIN_BW,		/* for cir profile */
+	ICE_MAX_BW,		/* for eir profile */
+	ICE_SHARED_BW		/* for shared profile */
+};
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_NO_SHARED_RL_PROF_ID	0xFFFF
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+/* Access Macros for Tx Sched RL Profile data */
+#define ICE_TXSCHED_GET_RL_PROF_ID(p) LE16_TO_CPU((p)->info.profile_id)
+#define ICE_TXSCHED_GET_RL_MBS(p) LE16_TO_CPU((p)->info.max_burst_size)
+#define ICE_TXSCHED_GET_RL_MULTIPLIER(p) LE16_TO_CPU((p)->info.rl_multiply)
+#define ICE_TXSCHED_GET_RL_WAKEUP_MV(p) LE16_TO_CPU((p)->info.wake_up_calc)
+#define ICE_TXSCHED_GET_RL_ENCODE(p) LE16_TO_CPU((p)->info.rl_encode)
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range: 1-8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range: 1-9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type leaf). A leaf node is created
+ *  under (a) as a child node when queues get added; the add Tx/Rx queue
+ *  admin commands need the TEID of (a) to add queues.
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: The asterisk tree above covers only the basic terminology and
+ *  scenario. Refer to the documentation for more info.
+ */
+
+/* Data structure for saving bw information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct ice_dcb_ets_cfg {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prio_table[ICE_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[ICE_MAX_TRAFFIC_CLASS];
+	u8 tsatable[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct ice_dcb_pfc_cfg {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcena;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct ice_dcb_app_priority_table {
+	u16 prot_id;
+	u8 priority;
+	u8 selector;
+};
+
+#define ICE_MAX_USER_PRIORITY	8
+#define ICE_DCBX_MAX_APPS	32
+#define ICE_LLDPDU_SIZE		1500
+#define ICE_TLV_STATUS_OPER	0x1
+#define ICE_TLV_STATUS_SYNC	0x2
+#define ICE_TLV_STATUS_ERR	0x4
+#define ICE_APP_PROT_ID_FCOE	0x8906
+#define ICE_APP_PROT_ID_ISCSI	0x0cbc
+#define ICE_APP_PROT_ID_FIP	0x8914
+#define ICE_APP_SEL_ETHTYPE	0x1
+#define ICE_APP_SEL_TCPIP	0x2
+#define ICE_CEE_APP_SEL_ETHTYPE	0x0
+#define ICE_CEE_APP_SEL_TCPIP	0x1
+
+struct ice_dcbx_cfg {
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct ice_dcb_ets_cfg etscfg;
+	struct ice_dcb_ets_cfg etsrec;
+	struct ice_dcb_pfc_cfg pfc;
+	struct ice_dcb_app_priority_table app[ICE_DCBX_MAX_APPS];
+	u8 dcbx_mode;
+#define ICE_DCBX_MODE_CEE	0x1
+#define ICE_DCBX_MODE_IEEE	0x2
+	u8 app_mode;
+#define ICE_DCBX_APPS_NON_WILLING	0x1
+};
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	/* List contain profile id(s) and other params per layer */
+	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to configure */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* Cumulative set of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	/* 2D Array for each Tx Sched RL Profile type */
+	struct ice_sched_rl_profile **cir_profiles;
+	struct ice_sched_rl_profile **eir_profiles;
+	struct ice_sched_rl_profile **srl_profiles;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	u16 max_burst_size;	/* driver sets this value */
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregators */
+	struct ice_bw_type_info tc_node_bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the itr/intrl granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding a filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
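+/* Illustrative example: for a hypothetical NVM version word 0x6012, the
+ * major version is (0x6012 & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT
+ * == 0x6 and the minor version is
+ * (0x6012 & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT == 0x12.
+ */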
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ST_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
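+/* Illustrative example: if the 16-bit sum of all words excluding the
+ * checksum word is 0x1234, the checksum word is 0xBABA - 0x1234 == 0xA886,
+ * so that the total sum equals ICE_SR_SW_CHECKSUM_BASE.
+ */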
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/* Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL
+ * register. This is needed to determine the BAR0 space for the VFs.
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 03/32] net/ice/base: add admin queue structures and commands
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 01/32] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 02/32] net/ice/base: add basic structures Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 04/32] net/ice/base: add sideband queue info Wenzhuo Lu
                     ` (28 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures for
the admin queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 1891 +++++++++++++++++++++++++++++++++
 1 file changed, 1891 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..9332f84
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1891 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and the driver is
+	 * expected to release the resource before the timeout. This value is
+	 * provided in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
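+
+/* Illustrative example: for MAC address 00:11:22:33:44:55, sah holds the
+ * big-endian value 0x0011 and sal holds the big-endian value 0x22334455.
+ */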
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In a response, a non-zero value means that not all of the
+	 * configuration was returned and a new command shall be sent with
+	 * this value in the 'first desc' field
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
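+
+/* Illustrative example: a pf_vf_num of 0x8003 has
+ * ICE_AQC_GET_SW_CONF_RESP_IS_VF set and a function number of 3, i.e.
+ * the VSI belongs to VF 3.
+ */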
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
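+	/* Illustrative example: a tc_mapping word of 0x2004 carries a queue
+	 * offset of (0x2004 & ICE_AQ_VSI_TC_Q_OFFSET_M) >>
+	 * ICE_AQ_VSI_TC_Q_OFFSET_S == 4 and a queue-number field of
+	 * (0x2004 & ICE_AQ_VSI_TC_Q_NUM_M) >> ICE_AQ_VSI_TC_Q_NUM_S == 4.
+	 */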
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, referring to the number of rules.
+	 * ops: update switch rules, referring to the number of filters.
+	 * ops: remove switch rules, referring to the entry index.
+	 * ops: get switch rules, referring to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. Last two bits
+	 * are other action identifier
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
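+
+/* Illustrative sketch, not part of this patch: encoding a single
+ * "forward to VSI" action in the 'act' word of a lookup rule. The
+ * helper name and vsi_id parameter are hypothetical.
+ */
+static inline __le32 ice_example_fwd_to_vsi_act(u16 vsi_id)
+{
+	u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
+		  ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;
+
+	act |= ((u32)vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+	       ICE_SINGLE_ACT_VSI_ID_M;
+	return CPU_TO_LE32(act);
+}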
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:2 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
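+
+/* Illustrative sketch, not part of this patch: encoding a generic-value
+ * large action (type 5) that writes 'val' at the RX descriptor profile
+ * index offset. The helper name is hypothetical.
+ */
+static inline __le32 ice_example_lg_act_generic(u16 val)
+{
+	u32 act = ICE_LG_ACT_GENERIC;
+
+	act |= ((u32)val << ICE_LG_ACT_GENERIC_VALUE_S) &
+	       ICE_LG_ACT_GENERIC_VALUE_M;
+	act |= ((u32)ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+		ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+	return CPU_TO_LE32(act);
+}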
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+#pragma pack(1)
+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} pdata;
+};
+
+#pragma pack()
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_txsched_move_grp_info_hdr {
+	__le32 src_parent_teid;
+	__le32 dest_parent_teid;
+	__le16 num_elems;
+	__le16 reserved;
+};
+
+
+struct ice_aqc_move_elem {
+	struct ice_aqc_txsched_move_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_conf_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+/* Rate limiting profile for
+ * Add RL profile (indirect 0x0410)
+ * Query RL profile (indirect 0x0411)
+ * Remove RL profile (indirect 0x0415)
+ * These indirect commands act on one or more
+ * RL profiles with the specified data.
+ */
+struct ice_aqc_rl_profile {
+	__le16 num_profiles;
+	__le16 num_processed; /* Only for response. Reserved in Command. */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_rl_profile_elem {
+	u8 level;
+	u8 flags;
+#define ICE_AQC_RL_PROFILE_TYPE_S	0x0
+#define ICE_AQC_RL_PROFILE_TYPE_M	(0x3 << ICE_AQC_RL_PROFILE_TYPE_S)
+#define ICE_AQC_RL_PROFILE_TYPE_CIR	0
+#define ICE_AQC_RL_PROFILE_TYPE_EIR	1
+#define ICE_AQC_RL_PROFILE_TYPE_SRL	2
+/* The following flag is used for Query RL Profile Data */
+#define ICE_AQC_RL_PROFILE_INVAL_S	0x7
+#define ICE_AQC_RL_PROFILE_INVAL_M	(0x1 << ICE_AQC_RL_PROFILE_INVAL_S)
+
+	__le16 profile_id;
+	__le16 max_burst_size;
+	__le16 rl_multiply;
+	__le16 wake_up_calc;
+	__le16 rl_encode;
+};
+
+
+struct ice_aqc_rl_profile_generic_elem {
+	struct ice_aqc_rl_profile_elem generic[1];
+};
+
+
+
+/* Configure L2 Node CGD (indirect 0x0414)
+ * This indirect command allows configuring a congestion domain for given L2
+ * node TEIDs in the scheduler topology.
+ */
+struct ice_aqc_cfg_l2_node_cgd {
+	__le16 num_l2_nodes;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_elem {
+	__le32 node_teid;
+	u8 cgd;
+	u8 reserved[3];
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_data {
+	struct ice_aqc_cfg_l2_node_cgd_elem elem[1];
+};
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+/* Query Node to Root Topology (indirect 0x0413)
+ * This command uses ice_aqc_get_elem as its data buffer.
+ */
+struct ice_aqc_query_node_to_root {
+	__le32 teid;
+	__le32 num_nodes; /* Response only */
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* PHY type defines (extended):
+ * The first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_50GBASE_CR2		BIT_ULL(36)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR2		BIT_ULL(37)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR2		BIT_ULL(38)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR2		BIT_ULL(39)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC	BIT_ULL(40)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2		BIT_ULL(41)
+#define ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC	BIT_ULL(42)
+#define ICE_PHY_TYPE_LOW_50G_AUI2		BIT_ULL(43)
+#define ICE_PHY_TYPE_LOW_50GBASE_CP		BIT_ULL(44)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR		BIT_ULL(45)
+#define ICE_PHY_TYPE_LOW_50GBASE_FR		BIT_ULL(46)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR		BIT_ULL(47)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4	BIT_ULL(48)
+#define ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC	BIT_ULL(49)
+#define ICE_PHY_TYPE_LOW_50G_AUI1		BIT_ULL(50)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR4		BIT_ULL(51)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR4		BIT_ULL(52)
+#define ICE_PHY_TYPE_LOW_100GBASE_LR4		BIT_ULL(53)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR4		BIT_ULL(54)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC	BIT_ULL(55)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4		BIT_ULL(56)
+#define ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC	BIT_ULL(57)
+#define ICE_PHY_TYPE_LOW_100G_AUI4		BIT_ULL(58)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4	BIT_ULL(59)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4	BIT_ULL(60)
+#define ICE_PHY_TYPE_LOW_100GBASE_CP2		BIT_ULL(61)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR2		BIT_ULL(62)
+#define ICE_PHY_TYPE_LOW_100GBASE_DR		BIT_ULL(63)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+/* The second set of defines is for phy_type_high. */
+#define ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4	BIT_ULL(0)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC	BIT_ULL(1)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2		BIT_ULL(2)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC	BIT_ULL(3)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2		BIT_ULL(4)
+#define ICE_PHY_TYPE_HIGH_MAX_INDEX		19
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR2			BIT(7)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR_PAM4		BIT(8)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR4			BIT(9)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR2_PAM4		BIT(10)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
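+
+/* Illustrative sketch, not part of this patch: testing an advertised
+ * PHY type in a filled Get PHY capabilities (0x0600) response buffer.
+ * The helper name is hypothetical; LE64_TO_CPU is assumed to come from
+ * ice_osdep.h.
+ */
+static inline bool
+ice_example_supports_25g_kr(struct ice_aqc_get_phy_caps_data *pcaps)
+{
+	return (LE64_TO_CPU(pcaps->phy_type_low) &
+		ICE_PHY_TYPE_LOW_25GBASE_KR) != 0;
+}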
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
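+
+/* Illustrative sketch, not part of this patch: the usual flow is to
+ * read the current capabilities (0x0600), copy them into this config
+ * buffer, adjust, then apply Set PHY config (0x0601) and Restart AN
+ * (0x0605). Enabling link pause here is just an example tweak; the
+ * helper name is hypothetical.
+ */
+static inline void
+ice_example_caps_to_cfg(struct ice_aqc_get_phy_caps_data *pcaps,
+			struct ice_aqc_set_phy_cfg_data *cfg)
+{
+	cfg->phy_type_low = pcaps->phy_type_low;
+	cfg->phy_type_high = pcaps->phy_type_high;
+	cfg->low_power_ctrl = pcaps->low_power_ctrl;
+	cfg->eee_cap = pcaps->eee_cap;
+	cfg->eeer_value = pcaps->eeer_value;
+	cfg->link_fec_opt = pcaps->link_fec_options;
+	cfg->caps = pcaps->caps | ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY |
+		    ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY;
+}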
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_50GB		BIT(9)
+#define ICE_AQ_LINK_SPEED_100GB		BIT(10)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+};
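+
+/* Illustrative sketch, not part of this patch: mapping the link_speed
+ * bits above to Mb/s. Only a few speeds are shown; the helper name is
+ * hypothetical.
+ */
+static inline u32 ice_example_speed_to_mbps(u16 aq_link_speed)
+{
+	switch (aq_link_speed) {
+	case ICE_AQ_LINK_SPEED_10GB:
+		return 10000;
+	case ICE_AQ_LINK_SPEED_25GB:
+		return 25000;
+	case ICE_AQ_LINK_SPEED_100GB:
+		return 100000;
+	default:
+		return 0; /* includes ICE_AQ_LINK_SPEED_UNKNOWN */
+	}
+}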
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* NVM Config Read (indirect 0x0704) and NVM Config Write (indirect 0x0705) */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
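+
+/* Illustrative sketch, not part of this patch: composing the 'flags'
+ * word above for a 512-entry PF LUT. The helper name is hypothetical.
+ */
+static inline __le16 ice_example_rss_lut_flags(void)
+{
+	u16 flags;
+
+	flags = (ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
+		 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+		ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M;
+	flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+		  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+		 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+	return CPU_TO_LE16(flags);
+}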
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Note that the length of each
+ * struct ice_aqc_add_tx_qgrp varies with the number of queues in the
+ * group.
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
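+
+/* Illustrative sketch, not part of this patch: sizing one group in the
+ * Disable TX LAN Queues buffer per the padding rule in the comment
+ * above. The helper name is hypothetical.
+ */
+static inline u16 ice_example_dis_txq_item_size(u8 num_qs)
+{
+	u16 sz = sizeof(struct ice_aqc_dis_txq_item) +
+		 (num_qs - 1) * sizeof(__le16);
+
+	if (!(num_qs % 2)) /* even count: pad so parent_teid aligns */
+		sz += 2;
+	return sz;
+}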
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
+
+
+/* TX LAN Queues Cleanup Event (0x0C31) */
+struct ice_aqc_txqs_cleanup {
+	__le16 caller_opc;
+	__le16 cmd_tag;
+	u8 reserved[12];
+};
+
+
+/* Move / Reconfigure TX Queues (indirect 0x0C32) */
+struct ice_aqc_move_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_CMD_TYPE_S		0
+#define ICE_AQC_Q_CMD_TYPE_M		(0x3 << ICE_AQC_Q_CMD_TYPE_S)
+#define ICE_AQC_Q_CMD_TYPE_MOVE		1
+#define ICE_AQC_Q_CMD_TYPE_TC_CHANGE	2
+#define ICE_AQC_Q_CMD_TYPE_MOVE_AND_TC	3
+#define ICE_AQC_Q_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_qs;
+	u8 rsvd;
+	u8 timeout;
+#define ICE_AQC_Q_CMD_TIMEOUT_S		2
+#define ICE_AQC_Q_CMD_TIMEOUT_M		(0x3F << ICE_AQC_Q_CMD_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the move TX LAN Queues
+ * command (0x0C32).
+ */
+struct ice_aqc_move_txqs_elem {
+	__le16 txq_id;
+	u8 q_cgd;
+	u8 rsvd;
+	__le32 q_teid;
+};
+
+
+struct ice_aqc_move_txqs_data {
+	__le32 src_teid;
+	__le32 dest_teid;
+	struct ice_aqc_move_txqs_elem txqs[1];
+};
+
+
+/* LAN Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
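+
+/* Illustrative sketch, not part of this patch: building one entry of
+ * the logging buffer above, enabling error and init events for the
+ * link module. The helper name is hypothetical.
+ */
+static inline __le16 ice_example_fw_log_entry(void)
+{
+	u16 entry = (ICE_AQC_FW_LOG_ID_LINK << ICE_AQC_FW_LOG_ID_S) &
+		    ICE_AQC_FW_LOG_ID_M;
+
+	entry |= ICE_AQC_FW_LOG_ERR_EN | ICE_AQC_FW_LOG_INIT_EN;
+	return CPU_TO_LE16(entry);
+}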
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_high: opaque data high-half
+ * @cookie_low: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_query_node_to_root query_node_to_root;
+		struct ice_aqc_cfg_l2_node_cgd cfg_l2_node_cgd;
+		struct ice_aqc_rl_profile rl_profile;
+
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_txqs_cleanup txqs_cleanup;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
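+
+/* Illustrative sketch, not part of this patch: the flag handling a
+ * driver applies to a descriptor that carries an indirect buffer:
+ * BUF always, plus LB when the buffer exceeds ICE_AQ_LG_BUF (the same
+ * pattern the control queue code uses when posting RQ buffers). The
+ * helper name and buf_size parameter are hypothetical.
+ */
+static inline void
+ice_example_set_buf_flags(struct ice_aq_desc *desc, u16 buf_size)
+{
+	desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(buf_size);
+}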
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* Object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_add_rl_profiles			= 0x0410,
+	ice_aqc_opc_query_rl_profiles			= 0x0411,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+	ice_aqc_opc_remove_rl_profiles			= 0x0415,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 04/32] net/ice/base: add sideband queue info
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (2 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 03/32] net/ice/base: add admin queue structures and commands Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 05/32] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
                     ` (27 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures
for the sideband queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sbq_cmd.h | 93 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h

diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor: indirect, non-posted command. */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
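+
+/* Illustrative sketch, not part of this patch: filling the internal
+ * message struct above for a register read from the CGU device. The
+ * helper name and register offset are hypothetical.
+ */
+static inline void ice_example_sbq_cgu_read(struct ice_sbq_msg_input *msg)
+{
+	msg->dest_dev = cgu;
+	msg->opcode = ice_sbq_msg_rd;
+	msg->msg_addr_low = 0x24; /* hypothetical register offset */
+	msg->msg_addr_high = 0;
+	msg->data = 0; /* unused for a read request */
+}
+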
+#endif /* _ICE_SBQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 05/32] net/ice/base: add device IDs for Intel(r) E800 Series NICs
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (3 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 04/32] net/ice/base: add sideband queue info Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 06/32] net/ice/base: add control queue information Wenzhuo Lu
                     ` (26 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add all the device IDs that represent the E800 series NICs.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_devids.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_devids.h

diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 06/32] net/ice/base: add control queue information
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (4 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 05/32] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 07/32] net/ice/base: add data center bridging (DCB) Wenzhuo Lu
                     ` (25 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and initialization routines for the control queues.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_controlq.c | 1098 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_controlq.h |   97 ++++
 2 files changed, 1195 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..fb82c23
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the queue is enabled, else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ register
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not atomic-context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
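+
+/* Illustrative caller sketch (the depth value below is a hypothetical
+ * example, not part of this patch): the fields ice_init_sq() requires are
+ * typically filled in right before the call:
+ *
+ *	cq->num_sq_entries = 32;
+ *	cq->sq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	ret_code = ice_init_sq(hw, cq);
+ */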
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not atomic-context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
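+/* ICE_FREE_CQ_BUFS is shared by the send and receive paths: the ring name
+ * (sq or rq) is token-pasted into both the entry-count field and the
+ * buffer-info array member, so one definition serves both queues.
+ */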
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
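+		/* e.g. with an expected API of 1.3: minors 6 and above log
+		 * the "newer than expected" message, minor 0 logs the
+		 * "older" one, and minors 1-5 load silently
+		 */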
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it's alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * returns the number of free desc
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Call the clean function to reclaim the descriptors that were
+	 * processed by FW/MBX; it returns the number of descriptors
+	 * available. The clean function called here could be called in a
+	 * separate thread in case of asynchronous completions.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > than buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if a timeout occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
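+
+/* Illustrative usage sketch (the opcode value below is hypothetical): a
+ * direct command is typically built with the helper above and then posted
+ * on the admin queue:
+ *
+ *	struct ice_aq_desc desc;
+ *	enum ice_status status;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, 0x0001);
+ *	status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ */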
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
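+
+/* Illustrative polling sketch (the event_buf storage is an assumption, not
+ * part of this patch): callers typically drain the ARQ one element at a
+ * time until no work is pending:
+ *
+ *	struct ice_rq_event_info e = { 0 };
+ *	u16 pending = 0;
+ *
+ *	e.buf_len = cq->rq_buf_size;
+ *	e.msg_buf = event_buf;
+ *	do {
+ *		if (ice_clean_rq_elem(hw, cq, &e, &pending))
+ *			break;
+ *		(handle e.desc and e.msg_buf here)
+ *	} while (pending);
+ */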
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
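+
+/* Worked example: with count = 4, next_to_clean = 1 and next_to_use = 3
+ * this yields 4 + 1 - 3 - 1 = 1 unused descriptor; one slot is always
+ * kept empty so a full ring can be told apart from an empty one.
+ */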
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address of the DMA head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 07/32] net/ice/base: add data center bridging (DCB)
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (5 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 06/32] net/ice/base: add control queue information Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 08/32] net/ice/base: add basic transmit scheduler Wenzhuo Lu
                     ` (24 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the base code to handle DCB(X): getting and setting LLDP MIBs, parsing
IEEE and CEE DCBX TLVs, starting/stopping the DCBX agent, and querying the
DCB configuration from firmware.
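
As a rough sketch of the intended use (illustrative only; the calling code
is not part of this patch), a driver queries the DCB state at init time:

	ret = ice_init_dcb(hw);	/* reads DCBX status, then the MIBs */
	if (!ret)
		cfg = &hw->port_info->local_dcbx_cfg;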

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 1385 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_dcb.h |  220 +++++++
 2 files changed, 1605 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_dcb.c
 create mode 100644 drivers/net/ice/base/ice_dcb.h

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
new file mode 100644
index 0000000..76411d5
--- /dev/null
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -0,0 +1,1385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_dcb.h"
+
+/**
+ * ice_aq_get_lldp_mib
+ * @hw: pointer to the hw struct
+ * @bridge_type: type of bridge requested
+ * @mib_type: Local, Remote or both Local and Remote MIBs
+ * @buf: pointer to the caller-supplied buffer to store the MIB block
+ * @buf_size: size of the buffer (in bytes)
+ * @local_len: length of the returned Local LLDP MIB
+ * @remote_len: length of the returned Remote LLDP MIB
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests the complete LLDP MIB (entire packet). (0x0A00)
+ */
+enum ice_status
+ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf,
+		    u16 buf_size, u16 *local_len, u16 *remote_len,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_get_mib *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.lldp_get_mib;
+
+	if (buf_size == 0 || !buf)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_get_mib);
+
+	cmd->type = mib_type & ICE_AQ_LLDP_MIB_TYPE_M;
+	cmd->type |= (bridge_type << ICE_AQ_LLDP_BRID_TYPE_S) &
+		ICE_AQ_LLDP_BRID_TYPE_M;
+
+	desc.datalen = CPU_TO_LE16(buf_size);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		if (local_len)
+			*local_len = LE16_TO_CPU(cmd->local_len);
+		if (remote_len)
+			*remote_len = LE16_TO_CPU(cmd->remote_len);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_cfg_lldp_mib_change
+ * @hw: pointer to the hw struct
+ * @ena_update: Enable or Disable event posting
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable or Disable posting of an event on ARQ when LLDP MIB
+ * associated with the interface changes (0x0A01)
+ */
+enum ice_status
+ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_set_mib_change *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_set_event;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_set_mib_change);
+
+	if (!ena_update)
+		cmd->command |= ICE_AQ_LLDP_MIB_UPDATE_DIS;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_start_lldp
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Start the embedded LLDP Agent on all ports. (0x0A06)
+ */
+enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_start *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_start;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_start);
+
+	cmd->command = ICE_AQ_LLDP_AGENT_START;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_lldp_mib - Set the LLDP MIB
+ * @hw: pointer to the hw struct
+ * @mib_type: Local, Remote or both Local and Remote MIBs
+ * @buf: pointer to the caller-supplied buffer to store the MIB block
+ * @buf_size: size of the buffer (in bytes)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the LLDP MIB. (0x0A08)
+ */
+enum ice_status
+ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_set_local_mib *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.lldp_set_mib;
+
+	if (buf_size == 0 || !buf)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_lldp_set_local_mib);
+
+	desc.flags |= CPU_TO_LE16((u16)ICE_AQ_FLAG_RD);
+	desc.datalen = CPU_TO_LE16(buf_size);
+
+	cmd->type = mib_type;
+	cmd->length = CPU_TO_LE16(buf_size);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_dcbx_status
+ * @hw: pointer to the hw struct
+ *
+ * Get the DCBX status from the Firmware
+ */
+u8 ice_get_dcbx_status(struct ice_hw *hw)
+{
+	u32 reg;
+
+	reg = rd32(hw, PRTDCB_GENS);
+	return (u8)((reg & PRTDCB_GENS_DCBX_STATUS_M) >>
+		    PRTDCB_GENS_DCBX_STATUS_S);
+}
+
+/**
+ * ice_parse_ieee_ets_common_tlv
+ * @buf: Data buffer to be parsed for ETS CFG/REC data
+ * @ets_cfg: Container to store parsed data
+ *
+ * Parses the common data of IEEE 802.1Qaz ETS CFG/REC TLV
+ */
+static void
+ice_parse_ieee_ets_common_tlv(u8 *buf, struct ice_dcb_ets_cfg *ets_cfg)
+{
+	u8 offset = 0;
+	int i;
+
+	/* Priority Assignment Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < 4; i++) {
+		ets_cfg->prio_table[i * 2] =
+			((buf[offset] & ICE_IEEE_ETS_PRIO_1_M) >>
+			 ICE_IEEE_ETS_PRIO_1_S);
+		ets_cfg->prio_table[i * 2 + 1] =
+			((buf[offset] & ICE_IEEE_ETS_PRIO_0_M) >>
+			 ICE_IEEE_ETS_PRIO_0_S);
+		offset++;
+	}
+
+	/* TC Bandwidth Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 *
+	 * TSA Assignment Table (8 octets)
+	 * Octets:| 9 | 10| 11| 12| 13| 14| 15| 16|
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		ets_cfg->tcbwtable[i] = buf[offset];
+		ets_cfg->tsatable[i] = buf[ICE_MAX_TRAFFIC_CLASS + offset++];
+	}
+}
+
+/**
+ * ice_parse_ieee_etscfg_tlv
+ * @tlv: IEEE 802.1Qaz ETS CFG TLV
+ * @dcbcfg: Local store to update ETS CFG data
+ *
+ * Parses IEEE 802.1Qaz ETS CFG TLV
+ */
+static void
+ice_parse_ieee_etscfg_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+
+	/* First Octet post subtype
+	 * --------------------------
+	 * |will-|CBS  | Re-  | Max |
+	 * |ing  |     |served| TCs |
+	 * --------------------------
+	 * |1bit | 1bit|3 bits|3bits|
+	 */
+	etscfg = &dcbcfg->etscfg;
+	etscfg->willing = ((buf[0] & ICE_IEEE_ETS_WILLING_M) >>
+			   ICE_IEEE_ETS_WILLING_S);
+	etscfg->cbs = ((buf[0] & ICE_IEEE_ETS_CBS_M) >> ICE_IEEE_ETS_CBS_S);
+	etscfg->maxtcs = ((buf[0] & ICE_IEEE_ETS_MAXTC_M) >>
+			  ICE_IEEE_ETS_MAXTC_S);
+
+	/* Begin parsing at Priority Assignment Table (offset 1 in buf) */
+	ice_parse_ieee_ets_common_tlv(&buf[1], etscfg);
+}
+
+/**
+ * ice_parse_ieee_etsrec_tlv
+ * @tlv: IEEE 802.1Qaz ETS REC TLV
+ * @dcbcfg: Local store to update ETS REC data
+ *
+ * Parses IEEE 802.1Qaz ETS REC TLV
+ */
+static void
+ice_parse_ieee_etsrec_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	/* Begin parsing at Priority Assignment Table (offset 1 in buf) */
+	ice_parse_ieee_ets_common_tlv(&buf[1], &dcbcfg->etsrec);
+}
+
+/**
+ * ice_parse_ieee_pfccfg_tlv
+ * @tlv: IEEE 802.1Qaz PFC CFG TLV
+ * @dcbcfg: Local store to update PFC CFG data
+ *
+ * Parses IEEE 802.1Qaz PFC CFG TLV
+ */
+static void
+ice_parse_ieee_pfccfg_tlv(struct ice_lldp_org_tlv *tlv,
+			  struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	/* ----------------------------------------
+	 * |will-|MBC  | Re-  | PFC |  PFC Enable  |
+	 * |ing  |     |served| cap |              |
+	 * -----------------------------------------
+	 * |1bit | 1bit|2 bits|4bits| 1 octet      |
+	 */
+	dcbcfg->pfc.willing = ((buf[0] & ICE_IEEE_PFC_WILLING_M) >>
+			       ICE_IEEE_PFC_WILLING_S);
+	dcbcfg->pfc.mbc = ((buf[0] & ICE_IEEE_PFC_MBC_M) >> ICE_IEEE_PFC_MBC_S);
+	dcbcfg->pfc.pfccap = ((buf[0] & ICE_IEEE_PFC_CAP_M) >>
+			      ICE_IEEE_PFC_CAP_S);
+	dcbcfg->pfc.pfcena = buf[1];
+}
+
+/**
+ * ice_parse_ieee_app_tlv
+ * @tlv: IEEE 802.1Qaz APP TLV
+ * @dcbcfg: Local store to update APP PRIO data
+ *
+ * Parses IEEE 802.1Qaz APP PRIO TLV
+ */
+static void
+ice_parse_ieee_app_tlv(struct ice_lldp_org_tlv *tlv,
+		       struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 offset = 0;
+	u16 typelen;
+	int i = 0;
+	u16 len;
+	u8 *buf;
+
+	typelen = NTOHS(tlv->typelen);
+	len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+	buf = tlv->tlvinfo;
+
+	/* Remove sizeof(ouisubtype) and the reserved byte from len;
+	 * the remaining length divided by 3 is the number of APP TLVs.
+	 */
+	len -= (sizeof(tlv->ouisubtype) + 1);
+
+	/* Move offset to App Priority Table */
+	offset++;
+
+	/* Application Priority Table (3 octets)
+	 * Octets:|         1          |    2    |    3    |
+	 *        -----------------------------------------
+	 *        |Priority|Rsrvd| Sel |    Protocol ID    |
+	 *        -----------------------------------------
+	 *   Bits:|23    21|20 19|18 16|15                0|
+	 *        -----------------------------------------
+	 */
+	while (offset < len) {
+		dcbcfg->app[i].priority = ((buf[offset] &
+					    ICE_IEEE_APP_PRIO_M) >>
+					   ICE_IEEE_APP_PRIO_S);
+		dcbcfg->app[i].selector = ((buf[offset] &
+					    ICE_IEEE_APP_SEL_M) >>
+					   ICE_IEEE_APP_SEL_S);
+		dcbcfg->app[i].prot_id = (buf[offset + 1] << 0x8) |
+			buf[offset + 2];
+		/* Move to next app */
+		offset += 3;
+		i++;
+		if (i >= ICE_DCBX_MAX_APPS)
+			break;
+	}
+
+	dcbcfg->numapps = i;
+}
+
+/**
+ * ice_parse_ieee_tlv
+ * @tlv: IEEE 802.1Qaz TLV
+ * @dcbcfg: Local store to update DCB configuration data
+ *
+ * Get the TLV subtype and send it to parsing function
+ * based on the subtype value
+ */
+static void
+ice_parse_ieee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 ouisubtype;
+	u8 subtype;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >>
+		       ICE_LLDP_TLV_SUBTYPE_S);
+	switch (subtype) {
+	case ICE_IEEE_SUBTYPE_ETS_CFG:
+		ice_parse_ieee_etscfg_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_ETS_REC:
+		ice_parse_ieee_etsrec_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_PFC_CFG:
+		ice_parse_ieee_pfccfg_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_SUBTYPE_APP_PRI:
+		ice_parse_ieee_app_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_parse_cee_pgcfg_tlv
+ * @tlv: CEE DCBX PG CFG TLV
+ * @dcbcfg: Local store to update ETS CFG data
+ *
+ * Parses CEE DCBX PG CFG TLV
+ */
+static void
+ice_parse_cee_pgcfg_tlv(struct ice_cee_feat_tlv *tlv,
+			struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+	u16 offset = 0;
+	int i;
+
+	etscfg = &dcbcfg->etscfg;
+
+	if (tlv->en_will_err & ICE_CEE_FEAT_TLV_WILLING_M)
+		etscfg->willing = 1;
+
+	etscfg->cbs = 0;
+	/* Priority Group Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < 4; i++) {
+		etscfg->prio_table[i * 2] =
+			((buf[offset] & ICE_CEE_PGID_PRIO_1_M) >>
+			 ICE_CEE_PGID_PRIO_1_S);
+		etscfg->prio_table[i * 2 + 1] =
+			((buf[offset] & ICE_CEE_PGID_PRIO_0_M) >>
+			 ICE_CEE_PGID_PRIO_0_S);
+		offset++;
+	}
+
+	/* PG Percentage Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |pg0|pg1|pg2|pg3|pg4|pg5|pg6|pg7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++)
+		etscfg->tcbwtable[i] = buf[offset++];
+
+	/* Number of TCs supported (1 octet) */
+	etscfg->maxtcs = buf[offset];
+}
+
+/**
+ * ice_parse_cee_pfccfg_tlv
+ * @tlv: CEE DCBX PFC CFG TLV
+ * @dcbcfg: Local store to update PFC CFG data
+ *
+ * Parses CEE DCBX PFC CFG TLV
+ */
+static void
+ice_parse_cee_pfccfg_tlv(struct ice_cee_feat_tlv *tlv,
+			 struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+
+	if (tlv->en_will_err & ICE_CEE_FEAT_TLV_WILLING_M)
+		dcbcfg->pfc.willing = 1;
+
+	/* ------------------------
+	 * | PFC Enable | PFC TCs |
+	 * ------------------------
+	 * | 1 octet    | 1 octet |
+	 */
+	dcbcfg->pfc.pfcena = buf[0];
+	dcbcfg->pfc.pfccap = buf[1];
+}
+
+/**
+ * ice_parse_cee_app_tlv
+ * @tlv: CEE DCBX APP TLV
+ * @dcbcfg: Local store to update APP PRIO data
+ *
+ * Parses CEE DCBX APP PRIO TLV
+ */
+static void
+ice_parse_cee_app_tlv(struct ice_cee_feat_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 len, typelen, offset = 0;
+	struct ice_cee_app_prio *app;
+	u8 i;
+
+	typelen = NTOHS(tlv->hdr.typelen);
+	len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+
+	dcbcfg->numapps = len / sizeof(*app);
+	if (!dcbcfg->numapps)
+		return;
+	if (dcbcfg->numapps > ICE_DCBX_MAX_APPS)
+		dcbcfg->numapps = ICE_DCBX_MAX_APPS;
+
+	for (i = 0; i < dcbcfg->numapps; i++) {
+		u8 up, selector;
+
+		app = (struct ice_cee_app_prio *)(tlv->tlvinfo + offset);
+		for (up = 0; up < ICE_MAX_USER_PRIORITY; up++)
+			if (app->prio_map & BIT(up))
+				break;
+
+		dcbcfg->app[i].priority = up;
+
+		/* Get Selector from lower 2 bits, and convert to IEEE */
+		selector = (app->upper_oui_sel & ICE_CEE_APP_SELECTOR_M);
+		switch (selector) {
+		case ICE_CEE_APP_SEL_ETHTYPE:
+			dcbcfg->app[i].selector = ICE_APP_SEL_ETHTYPE;
+			break;
+		case ICE_CEE_APP_SEL_TCPIP:
+			dcbcfg->app[i].selector = ICE_APP_SEL_TCPIP;
+			break;
+		default:
+			/* Keep selector as it is for unknown types */
+			dcbcfg->app[i].selector = selector;
+		}
+
+		dcbcfg->app[i].prot_id = NTOHS(app->protocol);
+		/* Move to next app */
+		offset += sizeof(*app);
+	}
+}
+
+/**
+ * ice_parse_cee_tlv
+ * @tlv: CEE DCBX TLV
+ * @dcbcfg: Local store to update DCBX config data
+ *
+ * Get the TLV subtype and send it to parsing function
+ * based on the subtype value
+ */
+static void
+ice_parse_cee_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_cee_feat_tlv *sub_tlv;
+	u8 subtype, feat_tlv_count = 0;
+	u16 len, tlvlen, typelen;
+	u32 ouisubtype;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	subtype = (u8)((ouisubtype & ICE_LLDP_TLV_SUBTYPE_M) >>
+		       ICE_LLDP_TLV_SUBTYPE_S);
+	/* Return if not CEE DCBX */
+	if (subtype != ICE_CEE_DCBX_TYPE)
+		return;
+
+	typelen = NTOHS(tlv->typelen);
+	tlvlen = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+	len = sizeof(tlv->typelen) + sizeof(ouisubtype) +
+		sizeof(struct ice_cee_ctrl_tlv);
+	/* Return if no CEE DCBX Feature TLVs */
+	if (tlvlen <= len)
+		return;
+
+	sub_tlv = (struct ice_cee_feat_tlv *)((char *)tlv + len);
+	while (feat_tlv_count < ICE_CEE_MAX_FEAT_TYPE) {
+		u16 sublen;
+
+		typelen = NTOHS(sub_tlv->hdr.typelen);
+		sublen = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+		subtype = (u8)((typelen & ICE_LLDP_TLV_TYPE_M) >>
+			       ICE_LLDP_TLV_TYPE_S);
+		switch (subtype) {
+		case ICE_CEE_SUBTYPE_PG_CFG:
+			ice_parse_cee_pgcfg_tlv(sub_tlv, dcbcfg);
+			break;
+		case ICE_CEE_SUBTYPE_PFC_CFG:
+			ice_parse_cee_pfccfg_tlv(sub_tlv, dcbcfg);
+			break;
+		case ICE_CEE_SUBTYPE_APP_PRI:
+			ice_parse_cee_app_tlv(sub_tlv, dcbcfg);
+			break;
+		default:
+			return;	/* Invalid Sub-type return */
+		}
+		feat_tlv_count++;
+		/* Move to next sub TLV */
+		sub_tlv = (struct ice_cee_feat_tlv *)
+			  ((char *)sub_tlv + sizeof(sub_tlv->hdr.typelen) +
+			   sublen);
+	}
+}
+
+/**
+ * ice_parse_org_tlv
+ * @tlv: Organization specific TLV
+ * @dcbcfg: Local store to update DCB configuration data
+ *
+ * Dispatch IEEE 802.1Qaz and CEE DCBX TLVs to their parsers; TLVs from
+ * any other OUI are ignored
+ */
+static void
+ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 ouisubtype;
+	u32 oui;
+
+	ouisubtype = NTOHL(tlv->ouisubtype);
+	oui = ((ouisubtype & ICE_LLDP_TLV_OUI_M) >> ICE_LLDP_TLV_OUI_S);
+	switch (oui) {
+	case ICE_IEEE_8021QAZ_OUI:
+		ice_parse_ieee_tlv(tlv, dcbcfg);
+		break;
+	case ICE_CEE_DCBX_OUI:
+		ice_parse_cee_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_lldp_to_dcb_cfg
+ * @lldpmib: LLDPDU to be parsed
+ * @dcbcfg: store for LLDPDU data
+ *
+ * Parse DCB configuration from the LLDPDU
+ */
+enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_lldp_org_tlv *tlv;
+	enum ice_status ret = ICE_SUCCESS;
+	u16 offset = 0;
+	u16 typelen;
+	u16 type;
+	u16 len;
+
+	if (!lldpmib || !dcbcfg)
+		return ICE_ERR_PARAM;
+
+	/* set to the start of LLDPDU */
+	lldpmib += ETH_HEADER_LEN;
+	tlv = (struct ice_lldp_org_tlv *)lldpmib;
+	while (1) {
+		typelen = NTOHS(tlv->typelen);
+		type = ((typelen & ICE_LLDP_TLV_TYPE_M) >> ICE_LLDP_TLV_TYPE_S);
+		len = ((typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S);
+		offset += sizeof(typelen) + len;
+
+		/* END TLV or beyond LLDPDU size */
+		if (type == ICE_TLV_TYPE_END || offset > ICE_LLDPDU_SIZE)
+			break;
+
+		switch (type) {
+		case ICE_TLV_TYPE_ORG:
+			ice_parse_org_tlv(tlv, dcbcfg);
+			break;
+		default:
+			break;
+		}
+
+		/* Move to next TLV */
+		tlv = (struct ice_lldp_org_tlv *)
+		      ((char *)tlv + sizeof(tlv->typelen) + len);
+	}
+
+	return ret;
+}
+
+/**
+ * ice_aq_get_dcb_cfg
+ * @hw: pointer to the hw struct
+ * @mib_type: mib type for the query
+ * @bridgetype: bridge type for the query (remote)
+ * @dcbcfg: store for LLDPDU data
+ *
+ * Query DCB configuration from the firmware
+ */
+enum ice_status
+ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
+		   struct ice_dcbx_cfg *dcbcfg)
+{
+	enum ice_status ret;
+	u8 *lldpmib;
+
+	/* Allocate the LLDPDU */
+	lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
+	if (!lldpmib)
+		return ICE_ERR_NO_MEMORY;
+
+	ret = ice_aq_get_lldp_mib(hw, bridgetype, mib_type, (void *)lldpmib,
+				  ICE_LLDPDU_SIZE, NULL, NULL, NULL);
+
+	if (ret == ICE_SUCCESS)
+		/* Parse LLDP MIB to get dcb configuration */
+		ret = ice_lldp_to_dcb_cfg(lldpmib, dcbcfg);
+
+	ice_free(hw, lldpmib);
+
+	return ret;
+}
+
+/**
+ * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * @hw: pointer to the hw struct
+ * @start_dcbx_agent: True if DCBx Agent needs to be started
+ *		      False if DCBx Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBx agent status
+ *		       True if DCBx Agent is active
+ *		       False if DCBx Agent is stopped
+ * @cd: pointer to command details structure or NULL
+ *
+ * Start/Stop the embedded dcbx Agent. If this wrapper function returns
+ * ICE_SUCCESS, the caller still needs to check whether the FW reports the
+ * requested agent state via dcbx_agent_status, and react accordingly. (0x0A09)
+ */
+enum ice_status
+ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
+		       bool *dcbx_agent_status, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_lldp_stop_start_specific_agent *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+	u16 opcode;
+
+	cmd = &desc.params.lldp_agent_ctrl;
+
+	opcode = ice_aqc_opc_lldp_stop_start_specific_agent;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+
+	if (start_dcbx_agent)
+		cmd->command = ICE_AQC_START_STOP_AGENT_START_DCBX;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	*dcbx_agent_status = false;
+
+	if (status == ICE_SUCCESS &&
+	    cmd->command == ICE_AQC_START_STOP_AGENT_START_DCBX)
+		*dcbx_agent_status = true;
+
+	return status;
+}
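+
+/* Illustrative usage (not part of this patch): a caller that wants the FW
+ * agent running checks the reported state even on success, per the note
+ * above:
+ *
+ *	bool active;
+ *
+ *	status = ice_aq_start_stop_dcbx(hw, true, &active, NULL);
+ *	if (!status && !active)
+ *		(the FW did not start the agent - handle it here)
+ */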
+
+/**
+ * ice_aq_get_cee_dcb_cfg
+ * @hw: pointer to the hw struct
+ * @buff: response buffer that stores CEE operational configuration
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get CEE DCBX mode operational configuration from firmware (0x0A07)
+ */
+enum ice_status
+ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
+		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cee_dcb_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, (void *)buff, sizeof(*buff), cd);
+}
+
+/**
+ * ice_cee_to_dcb_cfg
+ * @cee_cfg: pointer to CEE configuration struct
+ * @dcbcfg: DCB configuration struct
+ *
+ * Convert CEE configuration from firmware to DCB configuration
+ */
+static void
+ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
+		   struct ice_dcbx_cfg *dcbcfg)
+{
+	u32 status, tlv_status = LE32_TO_CPU(cee_cfg->tlv_status);
+	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
+	u16 app_prio = LE16_TO_CPU(cee_cfg->oper_app_prio);
+	u8 i, err, sync, oper, app_index, ice_app_sel_type;
+	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+	u16 ice_app_prot_id_type;
+
+	/* CEE PG data to ETS config */
+	dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
+
+	/* Note that the FW creates the oper_prio_tc nibbles reversed
+	 * from those in the CEE Priority Group sub-TLV.
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS / 2; i++) {
+		dcbcfg->etscfg.prio_table[i * 2] =
+			((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_0_M) >>
+			 ICE_CEE_PGID_PRIO_0_S);
+		dcbcfg->etscfg.prio_table[i * 2 + 1] =
+			((cee_cfg->oper_prio_tc[i] & ICE_CEE_PGID_PRIO_1_M) >>
+			 ICE_CEE_PGID_PRIO_1_S);
+	}
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		dcbcfg->etscfg.tcbwtable[i] = cee_cfg->oper_tc_bw[i];
+
+		if (dcbcfg->etscfg.prio_table[i] == ICE_CEE_PGID_STRICT) {
+			/* Map it to next empty TC */
+			dcbcfg->etscfg.prio_table[i] = cee_cfg->oper_num_tc - 1;
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_STRICT;
+		} else {
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_ETS;
+		}
+	}
+
+	/* CEE PFC data to PFC config */
+	dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
+	dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
+
+	app_index = 0;
+	for (i = 0; i < 3; i++) {
+		if (i == 0) {
+			/* FCoE APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_FCOE_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_FCOE_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FCOE_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FCOE_S;
+			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_FCOE;
+		} else if (i == 1) {
+			/* iSCSI APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_ISCSI_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_ISCSI_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_ISCSI_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
+			ice_app_sel_type = ICE_APP_SEL_TCPIP;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+		} else {
+			/* FIP APP */
+			ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
+			ice_aqc_cee_status_shift = ICE_AQC_CEE_FIP_STATUS_S;
+			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FIP_M;
+			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FIP_S;
+			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
+			ice_app_prot_id_type = ICE_APP_PROT_ID_FIP;
+		}
+
+		status = (tlv_status & ice_aqc_cee_status_mask) >>
+			 ice_aqc_cee_status_shift;
+		err = (status & ICE_TLV_STATUS_ERR) ? 1 : 0;
+		sync = (status & ICE_TLV_STATUS_SYNC) ? 1 : 0;
+		oper = (status & ICE_TLV_STATUS_OPER) ? 1 : 0;
+		/* Add FCoE/iSCSI/FIP APP if Error is False and
+		 * Oper/Sync is True
+		 */
+		if (!err && sync && oper) {
+			dcbcfg->app[app_index].priority =
+				(app_prio & ice_aqc_cee_app_mask) >>
+				ice_aqc_cee_app_shift;
+			dcbcfg->app[app_index].selector = ice_app_sel_type;
+			dcbcfg->app[app_index].prot_id = ice_app_prot_id_type;
+			app_index++;
+		}
+	}
+
+	dcbcfg->numapps = app_index;
+}
+
+/**
+ * ice_get_ieee_or_cee_dcb_cfg
+ * @pi: port information structure
+ * @dcbx_mode: mode of DCBX (IEEE or CEE)
+ *
+ * Get IEEE or CEE mode DCB configuration from the Firmware
+ */
+STATIC enum ice_status
+ice_get_ieee_or_cee_dcb_cfg(struct ice_port_info *pi, u8 dcbx_mode)
+{
+	struct ice_dcbx_cfg *dcbx_cfg = NULL;
+	enum ice_status ret;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (dcbx_mode == ICE_DCBX_MODE_IEEE)
+		dcbx_cfg = &pi->local_dcbx_cfg;
+	else if (dcbx_mode == ICE_DCBX_MODE_CEE)
+		dcbx_cfg = &pi->desired_dcbx_cfg;
+
+	/* Get Local DCB Config in case of ICE_DCBX_MODE_IEEE
+	 * or get CEE DCB Desired Config in case of ICE_DCBX_MODE_CEE
+	 */
+	ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_LOCAL,
+				 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
+	if (ret)
+		goto out;
+
+	/* Get Remote DCB Config */
+	dcbx_cfg = &pi->remote_dcbx_cfg;
+	ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+				 ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID, dcbx_cfg);
+	/* Don't treat ENOENT as an error for Remote MIBs */
+	if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT)
+		ret = ICE_SUCCESS;
+
+out:
+	return ret;
+}
+
+/**
+ * ice_get_dcb_cfg
+ * @pi: port information structure
+ *
+ * Get DCB configuration from the Firmware
+ */
+enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_cee_dcb_cfg_resp cee_cfg;
+	struct ice_dcbx_cfg *dcbx_cfg;
+	enum ice_status ret;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
+	if (ret == ICE_SUCCESS) {
+		/* CEE mode */
+		dcbx_cfg = &pi->local_dcbx_cfg;
+		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+		dcbx_cfg->tlv_status = LE32_TO_CPU(cee_cfg.tlv_status);
+		ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
+		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
+	} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
+		/* CEE mode not enabled; try querying IEEE data */
+		dcbx_cfg = &pi->local_dcbx_cfg;
+		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_IEEE;
+		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_IEEE);
+	}
+
+	return ret;
+}
+
+/**
+ * ice_init_dcb
+ * @hw: pointer to the hw struct
+ *
+ * Update DCB configuration from the Firmware
+ */
+enum ice_status ice_init_dcb(struct ice_hw *hw)
+{
+	struct ice_port_info *pi;
+	enum ice_status ret = ICE_SUCCESS;
+
+	if (!hw->func_caps.common_cap.dcb)
+		return ret;
+	pi = hw->port_info;
+	pi->is_sw_lldp = true;
+
+	/* Get DCBX status */
+	pi->dcbx_status = ice_get_dcbx_status(hw);
+
+	/* Check the DCBX Status */
+	switch (pi->dcbx_status) {
+	case ICE_DCBX_STATUS_NOT_STARTED:
+		break;
+	case ICE_DCBX_STATUS_DIS:
+		/* DCBx not in usable state, stop init */
+		return ret;
+	case ICE_DCBX_STATUS_DONE:
+	case ICE_DCBX_STATUS_IN_PROGRESS:
+		/* Get current DCBX configuration */
+		ret = ice_get_dcb_cfg(pi);
+		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
+		if (ret)
+			return ret;
+		break;
+	case ICE_DCBX_STATUS_MULTIPLE_PEERS:
+	default:
+		break;
+	}
+
+	/* Configure the LLDP MIB change event */
+	ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+	if (!ret)
+		pi->is_sw_lldp = false;
+
+	return ret;
+}
+
+/**
+ * ice_add_ieee_ets_common_tlv
+ * @buf: Data buffer to be populated with ice_dcb_ets_cfg data
+ * @ets_cfg: Container for ice_dcb_ets_cfg data
+ *
+ * Populate the TLV buffer with ice_dcb_ets_cfg data
+ */
+static void
+ice_add_ieee_ets_common_tlv(u8 *buf, struct ice_dcb_ets_cfg *ets_cfg)
+{
+	u8 priority0, priority1;
+	u8 offset = 0;
+	int i;
+
+	/* Priority Assignment Table (4 octets)
+	 * Octets:|    1    |    2    |    3    |    4    |
+	 *        -----------------------------------------
+	 *        |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7|
+	 *        -----------------------------------------
+	 *   Bits:|7  4|3  0|7  4|3  0|7  4|3  0|7  4|3  0|
+	 *        -----------------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS / 2; i++) {
+		priority0 = ets_cfg->prio_table[i * 2] & 0xF;
+		priority1 = ets_cfg->prio_table[i * 2 + 1] & 0xF;
+		buf[offset] = (priority0 << ICE_IEEE_ETS_PRIO_1_S) | priority1;
+		offset++;
+	}
+
+	/* TC Bandwidth Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 *
+	 * TSA Assignment Table (8 octets)
+	 * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+	 *        ---------------------------------
+	 *        |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7|
+	 *        ---------------------------------
+	 */
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		buf[offset] = ets_cfg->tcbwtable[i];
+		buf[ICE_MAX_TRAFFIC_CLASS + offset] = ets_cfg->tsatable[i];
+		offset++;
+	}
+}
+
+/**
+ * ice_add_ieee_ets_tlv - Prepare ETS TLV in IEEE format
+ * @tlv: Fill the ETS config data in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Prepare IEEE 802.1Qaz ETS CFG TLV
+ */
+static void
+ice_add_ieee_ets_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etscfg;
+	u8 *buf = tlv->tlvinfo;
+	u8 maxtcwilling = 0;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_ETS_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_ETS_CFG);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* First Octet post subtype
+	 * --------------------------
+	 * |will-|CBS  | Re-  | Max |
+	 * |ing  |     |served| TCs |
+	 * --------------------------
+	 * |1bit | 1bit|3 bits|3bits|
+	 */
+	etscfg = &dcbcfg->etscfg;
+	if (etscfg->willing)
+		maxtcwilling = BIT(ICE_IEEE_ETS_WILLING_S);
+	maxtcwilling |= etscfg->maxtcs & ICE_IEEE_ETS_MAXTC_M;
+	buf[0] = maxtcwilling;
+
+	/* Begin adding at Priority Assignment Table (offset 1 in buf) */
+	ice_add_ieee_ets_common_tlv(&buf[1], etscfg);
+}
+
+/**
+ * ice_add_ieee_etsrec_tlv - Prepare ETS Recommended TLV in IEEE format
+ * @tlv: Fill ETS Recommended TLV in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Prepare IEEE 802.1Qaz ETS REC TLV
+ */
+static void
+ice_add_ieee_etsrec_tlv(struct ice_lldp_org_tlv *tlv,
+			struct ice_dcbx_cfg *dcbcfg)
+{
+	struct ice_dcb_ets_cfg *etsrec;
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_ETS_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_ETS_REC);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	etsrec = &dcbcfg->etsrec;
+
+	/* First Octet is reserved */
+	/* Begin adding at Priority Assignment Table (offset 1 in buf) */
+	ice_add_ieee_ets_common_tlv(&buf[1], etsrec);
+}
+
+/**
+ * ice_add_ieee_pfc_tlv - Prepare PFC TLV in IEEE format
+ * @tlv: Fill PFC TLV in IEEE format
+ * @dcbcfg: Local store which holds the PFC CFG data
+ *
+ * Prepare IEEE 802.1Qaz PFC CFG TLV
+ */
+static void
+ice_add_ieee_pfc_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
+{
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+	u16 typelen;
+
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) |
+		   ICE_IEEE_PFC_TLV_LEN);
+	tlv->typelen = HTONS(typelen);
+
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_PFC_CFG);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* ----------------------------------------
+	 * |will-|MBC  | Re-  | PFC |  PFC Enable  |
+	 * |ing  |     |served| cap |              |
+	 * -----------------------------------------
+	 * |1bit | 1bit|2 bits|4bits| 1 octet      |
+	 */
+	if (dcbcfg->pfc.willing)
+		buf[0] = BIT(ICE_IEEE_PFC_WILLING_S);
+
+	if (dcbcfg->pfc.mbc)
+		buf[0] |= BIT(ICE_IEEE_PFC_MBC_S);
+
+	buf[0] |= dcbcfg->pfc.pfccap & 0xF;
+	buf[1] = dcbcfg->pfc.pfcena;
+}
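+
+/* Worked example (illustrative): willing = 1, mbc = 0, pfccap = 8 and
+ * pfcena = 0xFF pack as:
+ *	buf[0] = BIT(7) | (8 & 0xF) = 0x88
+ *	buf[1] = 0xFF
+ */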
+
+/**
+ * ice_add_ieee_app_pri_tlv -  Prepare APP TLV in IEEE format
+ * @tlv: Fill APP TLV in IEEE format
+ * @dcbcfg: Local store which holds the APP CFG data
+ *
+ * Prepare IEEE 802.1Qaz APP CFG TLV
+ */
+static void
+ice_add_ieee_app_pri_tlv(struct ice_lldp_org_tlv *tlv,
+			 struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 typelen, len, offset = 0;
+	u8 priority, selector, i = 0;
+	u8 *buf = tlv->tlvinfo;
+	u32 ouisubtype;
+
+	/* Just return if there are no APP TLVs */
+	if (dcbcfg->numapps == 0)
+		return;
+	ouisubtype = ((ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) |
+		      ICE_IEEE_SUBTYPE_APP_PRI);
+	tlv->ouisubtype = HTONL(ouisubtype);
+
+	/* Move offset to App Priority Table */
+	offset++;
+	/* Application Priority Table (3 octets)
+	 * Octets:|         1          |    2    |    3    |
+	 *        -----------------------------------------
+	 *        |Priority|Rsrvd| Sel |    Protocol ID    |
+	 *        -----------------------------------------
+	 *   Bits:|23    21|20 19|18 16|15                0|
+	 *        -----------------------------------------
+	 */
+	while (i < dcbcfg->numapps) {
+		priority = dcbcfg->app[i].priority & 0x7;
+		selector = dcbcfg->app[i].selector & 0x7;
+		buf[offset] = (priority << ICE_IEEE_APP_PRIO_S) | selector;
+		buf[offset + 1] = (dcbcfg->app[i].prot_id >> 0x8) & 0xFF;
+		buf[offset + 2] = dcbcfg->app[i].prot_id & 0xFF;
+		/* Move to next app */
+		offset += 3;
+		i++;
+		if (i >= ICE_DCBX_MAX_APPS)
+			break;
+	}
+	/* len includes size of ouisubtype + 1 reserved + 3*numapps */
+	len = sizeof(tlv->ouisubtype) + 1 + (i * 3);
+	typelen = ((ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) | (len & 0x1FF));
+	tlv->typelen = HTONS(typelen);
+}
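+
+/* Worked example (illustrative): a single APP entry with priority = 3,
+ * selector = 1 and prot_id = 0x8906 packs as:
+ *	buf[1] = (3 << 5) | 1 = 0x61
+ *	buf[2] = 0x89, buf[3] = 0x06
+ * and the resulting len is sizeof(ouisubtype) + 1 + 3 * 1 = 8.
+ */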
+
+/**
+ * ice_add_dcb_tlv - Add all IEEE TLVs
+ * @tlv: Fill TLV data in IEEE format
+ * @dcbcfg: Local store which holds the DCB Config
+ * @tlvid: Type of IEEE TLV
+ *
+ * Add tlv information
+ */
+static void
+ice_add_dcb_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg,
+		u16 tlvid)
+{
+	switch (tlvid) {
+	case ICE_IEEE_TLV_ID_ETS_CFG:
+		ice_add_ieee_ets_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_ETS_REC:
+		ice_add_ieee_etsrec_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_PFC_CFG:
+		ice_add_ieee_pfc_tlv(tlv, dcbcfg);
+		break;
+	case ICE_IEEE_TLV_ID_APP_PRI:
+		ice_add_ieee_app_pri_tlv(tlv, dcbcfg);
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * ice_dcb_cfg_to_lldp - Convert DCB configuration to MIB format
+ * @lldpmib: pointer to the LLDP MIB buffer to be populated
+ * @miblen: returned length of the LLDP MIB in bytes
+ * @dcbcfg: Local store which holds the DCB Config
+ *
+ * Convert the DCB configuration to MIB format
+ */
+void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg)
+{
+	u16 len, offset = 0, tlvid = ICE_TLV_ID_START;
+	struct ice_lldp_org_tlv *tlv;
+	u16 typelen;
+
+	tlv = (struct ice_lldp_org_tlv *)lldpmib;
+	while (1) {
+		ice_add_dcb_tlv(tlv, dcbcfg, tlvid++);
+		typelen = NTOHS(tlv->typelen);
+		len = (typelen & ICE_LLDP_TLV_LEN_M) >> ICE_LLDP_TLV_LEN_S;
+		if (len)
+			offset += len + 2;
+		/* END TLV or beyond LLDPDU size */
+		if (tlvid >= ICE_TLV_ID_END_OF_LLDPPDU ||
+		    offset > ICE_LLDPDU_SIZE)
+			break;
+		/* Move to next TLV */
+		if (len)
+			tlv = (struct ice_lldp_org_tlv *)
+				((char *)tlv + sizeof(tlv->typelen) + len);
+	}
+	*miblen = offset;
+}
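+
+/* Illustrative walk of the loop above: for an IEEE ETS CFG TLV,
+ * typelen = (127 << 9) | 25 = 0xFE19, so len = 0xFE19 & 0x1FF = 25 and
+ * offset advances by len + 2 = 27 octets (the 2-octet TLV header plus a
+ * payload that includes the 4-octet ouisubtype).
+ */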
+
+/**
+ * ice_set_dcb_cfg - Set the local LLDP MIB to FW
+ * @pi: port information structure
+ *
+ * Set DCB configuration to the Firmware
+ */
+enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi)
+{
+	u8 mib_type, *lldpmib = NULL;
+	struct ice_dcbx_cfg *dcbcfg;
+	enum ice_status ret;
+	struct ice_hw *hw;
+	u16 miblen;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* update the hw local config */
+	dcbcfg = &pi->local_dcbx_cfg;
+	/* Allocate the LLDPDU */
+	lldpmib = (u8 *)ice_malloc(hw, ICE_LLDPDU_SIZE);
+	if (!lldpmib)
+		return ICE_ERR_NO_MEMORY;
+
+	mib_type = SET_LOCAL_MIB_TYPE_LOCAL_MIB;
+	if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
+		mib_type |= SET_LOCAL_MIB_TYPE_CEE_NON_WILLING;
+
+	ice_dcb_cfg_to_lldp(lldpmib, &miblen, dcbcfg);
+	ret = ice_aq_set_lldp_mib(hw, mib_type, (void *)lldpmib, miblen,
+				  NULL);
+
+	ice_free(hw, lldpmib);
+
+	return ret;
+}
+
+/**
+ * ice_aq_query_cfg_port_ets - query or configure port ets configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ * @opcode: query or config port ets opcode
+ *
+ * Query the current port ets configuration or configure it
+ */
+enum ice_status
+ice_aq_query_cfg_port_ets(struct ice_port_info *pi,
+			  struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd, enum ice_adminq_opc opcode)
+{
+	struct ice_aqc_cfg_query_port_ets *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	cmd = &desc.params.port_ets;
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	cmd->port_teid = pi->root->info.node_teid;
+
+	if (opcode == ice_aqc_opc_cfg_port_ets)
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(pi->hw, &desc, buf, buf_size, cd);
+	return status;
+}
+
+/**
+ * ice_update_port_tc_tree_cfg - update tc tree configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ *
+ * update the SW DB with the new TC changes
+ */
+enum ice_status
+ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
+			    struct ice_aqc_port_ets_elem *buf)
+{
+	struct ice_sched_node *node, *tc_node;
+	struct ice_aqc_get_elem elem;
+	enum ice_status status = ICE_SUCCESS;
+	u32 teid1, teid2;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	/* mark the missing TC nodes as not in use */
+	for (i = 0; i < pi->root->num_children; i++) {
+		teid1 = LE32_TO_CPU(pi->root->children[i]->info.node_teid);
+		for (j = 0; j < ICE_MAX_TRAFFIC_CLASS; j++) {
+			teid2 = LE32_TO_CPU(buf->tc_node_teid[j]);
+			if (teid1 == teid2)
+				break;
+		}
+		if (j < ICE_MAX_TRAFFIC_CLASS)
+			continue;
+		/* TC is missing */
+		pi->root->children[i]->in_use = false;
+	}
+	/* add the new TC nodes */
+	for (j = 0; j < ICE_MAX_TRAFFIC_CLASS; j++) {
+		teid2 = LE32_TO_CPU(buf->tc_node_teid[j]);
+		if (teid2 == ICE_INVAL_TEID)
+			continue;
+		/* Is it already present in the tree ? */
+		for (i = 0; i < pi->root->num_children; i++) {
+			tc_node = pi->root->children[i];
+			if (!tc_node)
+				continue;
+			teid1 = LE32_TO_CPU(tc_node->info.node_teid);
+			if (teid1 == teid2) {
+				tc_node->tc_num = j;
+				tc_node->in_use = true;
+				break;
+			}
+		}
+		if (i < pi->root->num_children)
+			continue;
+		/* new TC */
+		status = ice_sched_query_elem(pi->hw, teid2, &elem);
+		if (!status)
+			status = ice_sched_add_node(pi, 1, &elem.generic[0]);
+		if (status)
+			break;
+		/* update the TC number */
+		node = ice_sched_find_node_by_teid(pi->root, teid2);
+		if (node)
+			node->tc_num = j;
+	}
+	return status;
+}
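+
+/* Illustrative example: if the SW DB root currently has TC children with
+ * TEIDs {0x20, 0x22} while the port ETS response reports {0x20, 0x21},
+ * the node 0x22 is marked !in_use, and a node for 0x21 is queried from FW
+ * via ice_sched_query_elem() and inserted via ice_sched_add_node().
+ */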
+
+/**
+ * ice_query_cfg_port_ets - query or configure port ets configuration
+ * @pi: port information structure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ * @config: true - config port ets, false - query port ets
+ *
+ * Query or configure the port ets configuration and update the
+ * SW DB with the TC changes
+ */
+enum ice_status
+ice_query_cfg_port_ets(struct ice_port_info *pi,
+		       struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+		       struct ice_sq_cd *cd, bool config)
+{
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+
+	opcode = config ? ice_aqc_opc_cfg_port_ets : ice_aqc_opc_query_port_ets;
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_aq_query_cfg_port_ets(pi, buf, buf_size, cd, opcode);
+	if (!status)
+		status = ice_update_port_tc_tree_cfg(pi, buf);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
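+
+/* Illustrative usage sketch (assumes an initialized struct ice_port_info *pi):
+ *
+ *	struct ice_aqc_port_ets_elem buf = { 0 };
+ *	enum ice_status status;
+ *
+ *	status = ice_query_cfg_port_ets(pi, &buf, sizeof(buf), NULL, false);
+ *
+ * On success the SW DB TC tree reflects the queried port ETS configuration.
+ */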
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
new file mode 100644
index 0000000..b0e5a5f
--- /dev/null
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -0,0 +1,220 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DCB_H_
+#define _ICE_DCB_H_
+
+#include "ice_type.h"
+
+#define ICE_DCBX_OFFLOAD_DIS		0
+#define ICE_DCBX_OFFLOAD_ENABLED	1
+
+#define ICE_DCBX_STATUS_NOT_STARTED	0
+#define ICE_DCBX_STATUS_IN_PROGRESS	1
+#define ICE_DCBX_STATUS_DONE		2
+#define ICE_DCBX_STATUS_MULTIPLE_PEERS	3
+#define ICE_DCBX_STATUS_DIS		7
+
+#define ICE_TLV_TYPE_END		0
+#define ICE_TLV_TYPE_ORG		127
+
+#define ICE_IEEE_8021QAZ_OUI		0x0080C2
+#define ICE_IEEE_SUBTYPE_ETS_CFG	9
+#define ICE_IEEE_SUBTYPE_ETS_REC	10
+#define ICE_IEEE_SUBTYPE_PFC_CFG	11
+#define ICE_IEEE_SUBTYPE_APP_PRI	12
+
+#define ICE_CEE_DCBX_OUI		0x001B21
+#define ICE_CEE_DCBX_TYPE		2
+
+#define ICE_CEE_SUBTYPE_CTRL		1
+#define ICE_CEE_SUBTYPE_PG_CFG		2
+#define ICE_CEE_SUBTYPE_PFC_CFG		3
+#define ICE_CEE_SUBTYPE_APP_PRI		4
+
+#define ICE_CEE_MAX_FEAT_TYPE		3
+#define ICE_LLDP_ADMINSTATUS_DIS	0
+#define ICE_LLDP_ADMINSTATUS_ENA_RX	1
+#define ICE_LLDP_ADMINSTATUS_ENA_TX	2
+#define ICE_LLDP_ADMINSTATUS_ENA_RXTX	3
+
+/* Defines for LLDP TLV header */
+#define ICE_LLDP_TLV_LEN_S		0
+#define ICE_LLDP_TLV_LEN_M		(0x01FF << ICE_LLDP_TLV_LEN_S)
+#define ICE_LLDP_TLV_TYPE_S		9
+#define ICE_LLDP_TLV_TYPE_M		(0x7F << ICE_LLDP_TLV_TYPE_S)
+#define ICE_LLDP_TLV_SUBTYPE_S		0
+#define ICE_LLDP_TLV_SUBTYPE_M		(0xFF << ICE_LLDP_TLV_SUBTYPE_S)
+#define ICE_LLDP_TLV_OUI_S		8
+#define ICE_LLDP_TLV_OUI_M		(0xFFFFFFUL << ICE_LLDP_TLV_OUI_S)
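+
+/* Example (illustrative): composing the two header words of an IEEE ETS CFG
+ * TLV with the fields above:
+ *	typelen    = (ICE_TLV_TYPE_ORG << ICE_LLDP_TLV_TYPE_S) | 25
+ *	           = (127 << 9) | 25 = 0xFE19
+ *	ouisubtype = (ICE_IEEE_8021QAZ_OUI << ICE_LLDP_TLV_OUI_S) | 9
+ *	           = (0x0080C2 << 8) | 9 = 0x80C209
+ */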
+
+/* Defines for IEEE ETS TLV */
+#define ICE_IEEE_ETS_MAXTC_S		0
+#define ICE_IEEE_ETS_MAXTC_M		(0x7 << ICE_IEEE_ETS_MAXTC_S)
+#define ICE_IEEE_ETS_CBS_S		6
+#define ICE_IEEE_ETS_CBS_M		BIT(ICE_IEEE_ETS_CBS_S)
+#define ICE_IEEE_ETS_WILLING_S		7
+#define ICE_IEEE_ETS_WILLING_M		BIT(ICE_IEEE_ETS_WILLING_S)
+#define ICE_IEEE_ETS_PRIO_0_S		0
+#define ICE_IEEE_ETS_PRIO_0_M		(0x7 << ICE_IEEE_ETS_PRIO_0_S)
+#define ICE_IEEE_ETS_PRIO_1_S		4
+#define ICE_IEEE_ETS_PRIO_1_M		(0x7 << ICE_IEEE_ETS_PRIO_1_S)
+#define ICE_CEE_PGID_PRIO_0_S		0
+#define ICE_CEE_PGID_PRIO_0_M		(0xF << ICE_CEE_PGID_PRIO_0_S)
+#define ICE_CEE_PGID_PRIO_1_S		4
+#define ICE_CEE_PGID_PRIO_1_M		(0xF << ICE_CEE_PGID_PRIO_1_S)
+#define ICE_CEE_PGID_STRICT		15
+
+/* Defines for IEEE TSA types */
+#define ICE_IEEE_TSA_STRICT		0
+#define ICE_IEEE_TSA_CBS		1
+#define ICE_IEEE_TSA_ETS		2
+#define ICE_IEEE_TSA_VENDOR		255
+
+/* Defines for IEEE PFC TLV */
+#define ICE_IEEE_PFC_CAP_S		0
+#define ICE_IEEE_PFC_CAP_M		(0xF << ICE_IEEE_PFC_CAP_S)
+#define ICE_IEEE_PFC_MBC_S		6
+#define ICE_IEEE_PFC_MBC_M		BIT(ICE_IEEE_PFC_MBC_S)
+#define ICE_IEEE_PFC_WILLING_S		7
+#define ICE_IEEE_PFC_WILLING_M		BIT(ICE_IEEE_PFC_WILLING_S)
+
+/* Defines for IEEE APP TLV */
+#define ICE_IEEE_APP_SEL_S		0
+#define ICE_IEEE_APP_SEL_M		(0x7 << ICE_IEEE_APP_SEL_S)
+#define ICE_IEEE_APP_PRIO_S		5
+#define ICE_IEEE_APP_PRIO_M		(0x7 << ICE_IEEE_APP_PRIO_S)
+
+/* TLV definitions for preparing MIB */
+#define ICE_TLV_ID_CHASSIS_ID		0
+#define ICE_TLV_ID_PORT_ID		1
+#define ICE_TLV_ID_TIME_TO_LIVE		2
+#define ICE_IEEE_TLV_ID_ETS_CFG		3
+#define ICE_IEEE_TLV_ID_ETS_REC		4
+#define ICE_IEEE_TLV_ID_PFC_CFG		5
+#define ICE_IEEE_TLV_ID_APP_PRI		6
+#define ICE_TLV_ID_END_OF_LLDPPDU	7
+#define ICE_TLV_ID_START		ICE_IEEE_TLV_ID_ETS_CFG
+
+#define ICE_IEEE_ETS_TLV_LEN		25
+#define ICE_IEEE_PFC_TLV_LEN		6
+#define ICE_IEEE_APP_TLV_LEN		11
+
+#pragma pack(1)
+/* IEEE 802.1AB LLDP TLV structure */
+struct ice_lldp_generic_tlv {
+	__be16 typelen;
+	u8 tlvinfo[1];
+};
+
+/* IEEE 802.1AB LLDP Organization specific TLV */
+struct ice_lldp_org_tlv {
+	__be16 typelen;
+	__be32 ouisubtype;
+	u8 tlvinfo[1];
+};
+#pragma pack()
+
+struct ice_cee_tlv_hdr {
+	__be16 typelen;
+	u8 operver;
+	u8 maxver;
+};
+
+struct ice_cee_ctrl_tlv {
+	struct ice_cee_tlv_hdr hdr;
+	__be32 seqno;
+	__be32 ackno;
+};
+
+struct ice_cee_feat_tlv {
+	struct ice_cee_tlv_hdr hdr;
+	u8 en_will_err; /* Bits: |En|Will|Err|Reserved(5)| */
+#define ICE_CEE_FEAT_TLV_ENA_M		0x80
+#define ICE_CEE_FEAT_TLV_WILLING_M	0x40
+#define ICE_CEE_FEAT_TLV_ERR_M		0x20
+	u8 subtype;
+	u8 tlvinfo[1];
+};
+
+#pragma pack(1)
+struct ice_cee_app_prio {
+	__be16 protocol;
+	u8 upper_oui_sel; /* Bits: |Upper OUI(6)|Selector(2)| */
+#define ICE_CEE_APP_SELECTOR_M	0x03
+	__be16 lower_oui;
+	u8 prio_map;
+};
+#pragma pack()
+
+/* TODO: The below structures related LLDP/DCBX variables
+ * and statistics are defined but need to find how to get
+ * the required information from the Firmware to use them
+ */
+
+/* IEEE 802.1AB LLDP Agent Statistics */
+struct ice_lldp_stats {
+	u64 remtablelastchangetime;
+	u64 remtableinserts;
+	u64 remtabledeletes;
+	u64 remtabledrops;
+	u64 remtableageouts;
+	u64 txframestotal;
+	u64 rxframesdiscarded;
+	u64 rxportframeerrors;
+	u64 rxportframestotal;
+	u64 rxporttlvsdiscardedtotal;
+	u64 rxporttlvsunrecognizedtotal;
+	u64 remtoomanyneighbors;
+};
+
+/* IEEE 802.1Qaz DCBX variables */
+struct ice_dcbx_variables {
+	u32 defmaxtrafficclasses;
+	u32 defprioritytcmapping;
+	u32 deftcbandwidth;
+	u32 deftsaassignment;
+};
+
+
+enum ice_status
+ice_aq_get_lldp_mib(struct ice_hw *hw, u8 bridge_type, u8 mib_type, void *buf,
+		    u16 buf_size, u16 *local_len, u16 *remote_len,
+		    struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
+		    struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
+		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
+		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
+u8 ice_get_dcbx_status(struct ice_hw *hw);
+enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+enum ice_status
+ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
+		   struct ice_dcbx_cfg *dcbcfg);
+enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
+enum ice_status ice_init_dcb(struct ice_hw *hw);
+enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
+void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
+enum ice_status
+ice_query_cfg_port_ets(struct ice_port_info *pi,
+		       struct ice_aqc_port_ets_elem *buff, u16 buf_size,
+		       struct ice_sq_cd *cmd_details, bool config);
+enum ice_status
+ice_aq_query_cfg_port_ets(struct ice_port_info *pi,
+			  struct ice_aqc_port_ets_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd, enum ice_adminq_opc opcode);
+enum ice_status
+ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
+			    struct ice_aqc_port_ets_elem *buf);
+#endif /* _ICE_DCB_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 08/32] net/ice/base: add basic transmit scheduler
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (6 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 07/32] net/ice/base: add data center bridging (DCB) Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 09/32] net/ice/base: add virtual switch code Wenzhuo Lu
                     ` (23 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code for the basic TX scheduler.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 5380 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_sched.h |  210 ++
 2 files changed, 5590 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..7acbae6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,5380 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through; stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is the same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node into the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from hw
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
+			  hw->adminq.sq_last_status);
+
+	ice_free(hw, buf);
+	return status;
+}
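+
+/* Note on the buffer sizing above: struct ice_aqc_delete_elem already holds
+ * one teid, so num_nodes teids need sizeof(*buf) plus (num_nodes - 1) extra
+ * u32s; e.g. num_nodes = 4 adds 12 bytes on top of sizeof(*buf).
+ */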
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional and won't go
+			 * more than 9 calls deep
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+
+		ice_sched_remove_elems(hw, node->parent, 1, &teid);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue to port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling element
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+/**
+ * ice_aq_cfg_sched_elems - configures scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_cfgd: returns total number of elements configured
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure scheduling elements (0x0403)
+ */
+static enum ice_status
+ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
+		       struct ice_aqc_conf_elem *buf, u16 buf_size,
+		       u16 *elems_cfgd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_cfg_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_cfgd, cd);
+}
+
+/**
+ * ice_aq_move_sched_elems - move scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to move
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_movd: returns total number of groups moved
+ * @cd: pointer to command details structure or NULL
+ *
+ * Move scheduling elements (0x0408)
+ */
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_movd, cd);
+}
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
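+
+/* Illustrative usage sketch (hypothetical node from the SW DB): suspending
+ * a single node before reconfiguring it:
+ *
+ *	u32 teid = LE32_TO_CPU(node->info.node_teid);
+ *
+ *	status = ice_sched_suspend_resume_elems(hw, 1, &teid, true);
+ */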
+
+/**
+ * ice_aq_rl_profile - performs a rate limiting task
+ * @hw: pointer to the hw struct
+ * @opcode: opcode for add, query, or remove profile(s)
+ * @num_profiles: the number of profiles
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_processed: number of processed add or remove profile(s) to return
+ * @cd: pointer to command details structure
+ *
+ * RL profile function to add, query, or remove profile(s)
+ */
+static enum ice_status
+ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode,
+		  u16 num_profiles, struct ice_aqc_rl_profile_generic_elem *buf,
+		  u16 buf_size, u16 *num_processed, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_rl_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.rl_profile;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	cmd->num_profiles = CPU_TO_LE16(num_profiles);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_processed)
+		*num_processed = LE16_TO_CPU(cmd->num_processed);
+	return status;
+}
+
+/**
+ * ice_aq_add_rl_profile - adds rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_added: total number of profiles added to return
+ * @cd: pointer to command details structure
+ *
+ * Add rl profile (0x0410)
+ */
+static enum ice_status
+ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
+		      struct ice_aqc_rl_profile_generic_elem *buf,
+		      u16 buf_size, u16 *num_profiles_added,
+		      struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_add_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_added, cd);
+}
+
+/**
+ * ice_aq_query_rl_profile - query rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure
+ *
+ * Query rl profile (0x0411)
+ */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
+				 num_profiles, buf, buf_size, NULL, cd);
+}
+
+/**
+ * ice_aq_remove_rl_profile - removes rl profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to remove
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_removed: total number of profiles removed to return
+ * @cd: pointer to command details structure or NULL
+ *
+ * Remove rl profile (0x0415)
+ */
+static enum ice_status
+ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			 struct ice_aqc_rl_profile_generic_elem *buf,
+			 u16 buf_size, u16 *num_profiles_removed,
+			 struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_remove_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_removed, cd);
+}
+
+/**
+ * ice_sched_clear_rl_prof - clears rl prof entries
+ * @pi: port information structure
+ *
+ * This function removes all rl profiles from hw as well as from SW DB.
+ */
+static void ice_sched_clear_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			struct ice_hw *hw = pi->hw;
+			enum ice_status status;
+
+			rl_prof_elem->prof_id_ref = 0;
+			status = ice_sched_del_rl_profile(hw, rl_prof_elem);
+			if (status) {
+				ice_debug(hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+				/* On error, free mem required */
+				LIST_DEL(&rl_prof_elem->list_entry);
+				ice_free(hw, rl_prof_elem);
+			}
+		}
+	}
+}
+
+/**
+ * ice_sched_clear_agg - clears the agg related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes agg list and free up agg related memory
+ * previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	/* remove rl profiles related lists */
+	ice_sched_clear_rl_prof(pi);
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+/**
+ * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
+ * @hw: pointer to the hw struct
+ * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure L2 Node CGD (0x0414)
+ */
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf,
+		       u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_cfg_l2_node_cgd *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.cfg_l2_node_cgd;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to hw as well as to the SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add node failed FW Error %d\n",
+			  hw->adminq.sq_last_status);
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* current number of children + required nodes exceed max children ? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* utilize all the spaces if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional and won't go
+			 * more than 2 calls deep
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't modify the first node teid memory if the first node
+		 * was already added in the call above; instead, pass scratch
+		 * memory to any further recursive calls.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional, for 1024 queues
+		 * per VSI, it goes max of 16 iterations.
+		 * 1024 / 8 = 128 layer 8 nodes
+		 * 128 /8 = 16 (add 8 nodes per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* It's always total layers - 1, the array is 0 relative so -2 */
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the vsi layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_sched_get_agg_layer - get the current aggregator layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current aggregator layer number
+ */
+static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
+{
+	/* Num Layers       agg layer
+	 *     9               4
+	 *     7 or less       sw_entry_point_layer
+	 */
+	/* calculate the agg layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port info structure for the tree to cleanup
+ *
+ * This function is the initial call to find the total number of Tx scheduler
+ * resources, default topology created by firmware and storing the information
+ * in SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1-8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1-9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+	for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+		INIT_LIST_HEAD(&pi->rl_prof_list[i]);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_node - Get the struct ice_sched_node for given teid
+ * @pi: port information structure
+ * @teid: Scheduler node TEID
+ *
+ * This function retrieves the ice_sched_node struct for given teid from
+ * the SW DB and returns it to the caller.
+ */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid)
+{
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return NULL;
+
+	/* Find the node starting from root */
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_find_node_by_teid(pi->root, teid);
+	ice_release_lock(&pi->sched_lock);
+
+	if (!node)
+		ice_debug(pi->hw, ICE_DBG_SCHED,
+			  "Node not found for teid=0x%x\n", teid);
+
+	return node;
+}
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
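+
+/* Illustrative example: with num_tx_sched_layers = 9, the loop above copies
+ * layer_props[1].max_sibl_grp_sz into max_children[0] (the root's children)
+ * through layer_props[8].max_sibl_grp_sz into max_children[7]; leaf nodes
+ * have no children, so no entry is needed for the last layer.
+ */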
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
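+		/* Layers grow from root to leaf, so once this child sits
+		 * deeper than the node we look for, the node cannot be in
+		 * this subtree.
+		 */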
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional, and won't go
+		 * deeper than 8 calls
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* bail out on an invalid VSI handle (no VSI node on this TC) */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_get_agg_node - Get an aggregator node based on agg id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @agg_id: aggregator id
+ *
+ * This function retrieves an aggregator node for a given agg id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id)
+{
+	struct ice_sched_node *node;
+	u8 agg_layer;
+
+	agg_layer = ice_sched_get_agg_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->agg_id == agg_id)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_check_node - Compare node parameters between SW DB and HW DB
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function queries and compares the HW element with SW DB node parameters
+ */
+static bool ice_sched_check_node(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	struct ice_aqc_get_elem buf;
+	enum ice_status status;
+	u32 node_teid;
+
+	node_teid = LE32_TO_CPU(node->info.node_teid);
+	status = ice_sched_query_elem(hw, node_teid, &buf);
+	if (status != ICE_SUCCESS)
+		return false;
+
+	if (memcmp(buf.generic, &node->info, sizeof(*buf.generic))) {
+		ice_debug(hw, ICE_DBG_SCHED, "Node mismatch for teid=0x%x\n",
+			  node_teid);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
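+
+/* Worked example for the calculation above (illustrative fan-outs only):
+ * with num_qs = 20 and max_children = 8 at the queue group layer, the
+ * qgroup layer needs DIVIDE_AND_ROUND_UP(20, 8) = 3 nodes; if the layer
+ * above also has a fan-out of 8, it needs DIVIDE_AND_ROUND_UP(3, 8) = 1
+ * node, and so on up to (but not including) the VSI layer.
+ */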
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of supported nodes needed to add this
+ * VSI into Tx tree including the VSI, parent and intermediate nodes in below
+ * layers
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add intermediate nodes if the TC has no children;
+		 * at least one node is always needed for the VSI
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached their
+			 * max children, then add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* tree has one intermediate node to add this new VSI.
+			 * So no need to calculate supported nodes for below
+			 * layers.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI supported nodes into Tx tree including the
+ * VSI, its parent and intermediate nodes in below layers
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add vsi supported nodes to tc subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* number of queues is unchanged or less than the previous number */
+	if (new_numqs <= prev_numqs)
+		return status;
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+	/* Keep the max number of queue configuration all the time. Update the
+	 * tree only if number of queues > previous number of queues. This may
+	 * leave some extra nodes in the tree if number of queues < previous
+	 * number but that wouldn't harm anything. Removing those extra nodes
+	 * may complicate the code if those nodes are part of SRL or
+	 * individually rate limited.
+	 */
+	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+					       new_num_nodes, owner);
+	if (status)
+		return status;
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and VSI is in suspended state then resume the VSI back. If TC is
+ * disabled then suspend the VSI if it is not already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "add/config VSI %d\n", vsi_handle);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if tc is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queues whenever the VSI gets added for
+		 * the first time into the scheduler tree (boot or after
+		 * reset). We need to recreate the child nodes all the time
+		 * in these cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
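+
+/* Illustrative usage (names per this file): when a TC is enabled with
+ * maxqs LAN queues, a caller would typically invoke
+ *	ice_sched_cfg_vsi(pi, vsi_handle, tc, maxqs,
+ *			  ICE_SCHED_NODE_OWNER_LAN, true);
+ * and pass enable = false later to suspend the VSI node on that TC.
+ */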
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove agg related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
+ * @node: pointer to the sub-tree node
+ *
+ * This function checks for a leaf node presence in a given sub-tree node.
+ */
+static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < node->num_children; i++)
+		if (ice_sched_is_leaf_node_present(node->children[i]))
+			return true;
+	/* check for a leaf node */
+	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+}
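+
+/* Note: the recursion above is bounded by the tree depth, which cannot
+ * exceed ICE_AQC_TOPO_MAX_LEVEL_NUM layers.
+ */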
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma children nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+		u8 j = 0;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (ice_sched_is_leaf_node_present(vsi_node)) {
+			ice_debug(pi->hw, ICE_DBG_SCHED,
+				  "VSI has leaf nodes in TC %d\n", i);
+			status = ICE_ERR_IN_USE;
+			goto exit_sched_rm_vsi_cfg;
+		}
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+/**
+ * ice_sched_is_tree_balanced - Check tree nodes are identical or not
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function compares all the nodes for a given tree against HW DB nodes
+ * This function needs to be called with the port_info->sched_lock held
+ */
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	u8 i;
+
+	/* start from the leaf node */
+	for (i = 0; i < node->num_children; i++)
+		/* Fail if node doesn't match with the SW DB
+		 * this recursion is intentional, and wouldn't
+		 * go more than 9 calls
+		 */
+		if (!ice_sched_is_tree_balanced(hw, node->children[i]))
+			return false;
+
+	return ice_sched_check_node(hw, node);
+}
+
+/**
+ * ice_aq_query_node_to_root - retrieve the tree topology for a given node teid
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function retrieves the tree topology from the firmware for a given
+ * node teid to the root node.
+ */
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_query_node_to_root *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.query_node_to_root;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
+	cmd->teid = CPU_TO_LE32(node_teid);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_agg_info - get the aggregator info
+ * @hw: pointer to the hardware structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the agg id. It returns the aggregator info if the
+ * agg id is present in the list, otherwise it returns NULL.
+ */
+static struct ice_sched_agg_info*
+ice_get_agg_info(struct ice_hw *hw, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id)
+			return agg_info;
+
+	return NULL;
+}
+
+/**
+ * ice_move_all_vsi_to_dflt_agg - move all VSI(s) to default agg
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: traffic class number
+ * @rm_vsi_info: true or false
+ *
+ * This function moves all the VSI(s) to the default aggregator and deletes
+ * the agg VSI info based on the passed-in boolean parameter rm_vsi_info.
+ * The caller must hold the scheduler lock.
+ */
+static enum ice_status
+ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi,
+			     struct ice_sched_agg_info *agg_info, u8 tc,
+			     bool rm_vsi_info)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_vsi_info *tmp;
+	enum ice_status status = ICE_SUCCESS;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, tmp, &agg_info->agg_vsi_list,
+				 ice_sched_agg_vsi_info, list_entry) {
+		u16 vsi_handle = agg_vsi_info->vsi_handle;
+
+		/* Move VSI to default agg */
+		if (!ice_is_tc_ena(agg_vsi_info->tc_bitmap[0], tc))
+			continue;
+
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle,
+						   ICE_DFLT_AGG_ID, tc);
+		if (status)
+			break;
+
+		ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+		if (rm_vsi_info && !agg_vsi_info->tc_bitmap[0]) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(pi->hw, agg_vsi_info);
+		}
+	}
+
+	return status;
+}
+
+/**
+ * ice_rm_agg_cfg_tc - remove agg configuration for tc
+ * @pi: port information structure
+ * @agg_info: aggregator id
+ * @tc: tc number
+ * @rm_vsi_info: bool value true or false
+ *
+ * This function removes the aggregator's references to VSIs for the given
+ * tc. It removes the agg configuration completely for the requested tc. The
+ * caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
+		  u8 tc, bool rm_vsi_info)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	/* If nothing to remove - return success */
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		goto exit_rm_agg_cfg_tc;
+
+	status = ice_move_all_vsi_to_dflt_agg(pi, agg_info, tc, rm_vsi_info);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	/* Delete aggregator node(s) */
+	status = ice_sched_rm_agg_cfg(pi, agg_info->agg_id, tc);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	ice_clear_bit(tc, agg_info->tc_bitmap);
+exit_rm_agg_cfg_tc:
+	return status;
+}
+
+/**
+ * ice_save_agg_tc_bitmap - save agg TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * Save agg TC bitmap. This function needs to be called with scheduler
+ * lock held.
+ */
+static enum ice_status
+ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
+		       ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_cfg_agg - configure agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: bits TC bitmap
+ *
+ * It registers a unique aggregator node into scheduler services. It
+ * allows a user to register with a unique ID to track its resources.
+ * The aggregator type determines if this is a queue group, VSI group
+ * or aggregator group. It then creates the agg node(s) for requested
+ * tc(s) or removes an existing agg node including its configuration
+ * if indicated via tc_bitmap. Call ice_rm_agg_cfg to release agg
+ * resources and remove agg id.
+ * This function needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+		  enum ice_agg_type agg_type, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info) {
+		/* Create a new entry for the new agg id */
+		agg_info = (struct ice_sched_agg_info *)
+			ice_malloc(hw, sizeof(*agg_info));
+		if (!agg_info) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit_reg_agg;
+		}
+		agg_info->agg_id = agg_id;
+		agg_info->agg_type = agg_type;
+		agg_info->tc_bitmap[0] = 0;
+
+		/* Initialize the aggregator vsi list head */
+		INIT_LIST_HEAD(&agg_info->agg_vsi_list);
+
+		/* Add new entry in agg list */
+		LIST_ADD(&agg_info->list_entry, &hw->agg_list);
+	}
+	/* Create agg node(s) for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc)) {
+			/* Delete agg cfg tc if it exists previously */
+			status = ice_rm_agg_cfg_tc(pi, agg_info, tc, false);
+			if (status)
+				break;
+			continue;
+		}
+
+		/* Check if agg node for tc already exists */
+		if (ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+			continue;
+
+		/* Create new agg node for tc */
+		status = ice_sched_add_agg_cfg(pi, agg_id, tc);
+		if (status)
+			break;
+
+		/* Save agg node's tc information */
+		ice_set_bit(tc, agg_info->tc_bitmap);
+	}
+exit_reg_agg:
+	return status;
+}
+
+/**
+ * ice_cfg_agg - config agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: bits TC bitmap
+ *
+ * This function configures aggregator node(s).
+ */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
+	    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_cfg_agg(pi, agg_id, agg_type,
+				   (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_tc_bitmap(pi, agg_id,
+						(ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
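+
+/* Illustrative usage: ice_cfg_agg(pi, agg_id, ICE_AGG_TYPE_AGG, 0x3)
+ * creates (or keeps) aggregator nodes on TC 0 and TC 1; TCs whose bit is
+ * clear in tc_bitmap get any existing agg configuration removed, as done
+ * by ice_sched_cfg_agg() above.
+ */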
+
+/**
+ * ice_get_agg_vsi_info - get the aggregator VSI info
+ * @agg_info: aggregator info
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns agg VSI info based on VSI handle. This function needs
+ * to be called with scheduler lock held.
+ */
+static struct ice_sched_agg_vsi_info*
+ice_get_agg_vsi_info(struct ice_sched_agg_info *agg_info, u16 vsi_handle)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+	LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+			    ice_sched_agg_vsi_info, list_entry)
+		if (agg_vsi_info->vsi_handle == vsi_handle)
+			return agg_vsi_info;
+
+	return NULL;
+}
+
+/**
+ * ice_get_vsi_agg_info - get the agg info of VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: Sw VSI handle
+ *
+ * The function returns the agg info of the VSI represented via vsi_handle.
+ * In this case the VSI has a different aggregator than the default one. This
+ * function needs to be called with the scheduler lock held.
+ */
+static struct ice_sched_agg_info*
+ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+		if (agg_vsi_info)
+			return agg_info;
+	}
+	return NULL;
+}
+
+/**
+ * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * Save VSI to aggregator TC bitmap. This function needs to be called with
+ * the scheduler lock held.
+ */
+static enum ice_status
+ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+			   ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_assoc_vsi_to_agg - associate or move VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * This function moves VSI to a new or default aggregator node. If VSI is
+ * already associated to the agg node then no operation is performed on the
+ * tree. This function needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
+			   u16 vsi_handle, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info) {
+		/* Create new entry for vsi under agg list */
+		agg_vsi_info = (struct ice_sched_agg_vsi_info *)
+			ice_malloc(hw, sizeof(*agg_vsi_info));
+		if (!agg_vsi_info)
+			return ICE_ERR_PARAM;
+
+		/* add vsi id into the agg list */
+		agg_vsi_info->vsi_handle = vsi_handle;
+		LIST_ADD(&agg_vsi_info->list_entry, &agg_info->agg_vsi_list);
+	}
+	/* Move vsi node to new agg node for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+
+		/* Move VSI to new agg */
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
+		if (status)
+			break;
+
+		if (agg_id != ICE_DFLT_AGG_ID)
+			ice_set_bit(tc, agg_vsi_info->tc_bitmap);
+		else
+			ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+	}
+	/* If the vsi moved back to the default agg, delete the agg_vsi_info
+	 * entry.
+	 */
+	if (!ice_is_any_bit_set(agg_vsi_info->tc_bitmap,
+				ICE_MAX_TRAFFIC_CLASS)) {
+		LIST_DEL(&agg_vsi_info->list_entry);
+		ice_free(hw, agg_vsi_info);
+	}
+	return status;
+}
+
+/**
+ * ice_move_vsi_to_agg - moves VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: tc bitmap of enabled tc(s)
+ *
+ * Move or associate VSI to a new or default aggregator node.
+ */
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
+					    (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
+						    (ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
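+
+/* Illustrative usage: once agg_id has been registered via ice_cfg_agg(),
+ * a VSI can be attached to it on TC 0 with
+ *	ice_move_vsi_to_agg(pi, agg_id, vsi_handle, 0x1);
+ * moving the VSI back to ICE_DFLT_AGG_ID clears the per-TC bits and, once
+ * no TC is left, frees the agg_vsi_info entry (see above).
+ */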
+
+/**
+ * ice_rm_agg_cfg - remove agg configuration
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the aggregator's VSI references and deletes the agg
+ * id info. It removes the agg configuration completely.
+ */
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
+		if (status)
+			goto exit_ice_rm_agg_cfg;
+	}
+
+	if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
+		status = ICE_ERR_IN_USE;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	/* Safe to delete entry now */
+	LIST_DEL(&agg_info->list_entry);
+	ice_free(pi->hw, agg_info);
+
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+
+exit_ice_rm_agg_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_set_clear_cir_bw_alloc - set or clear CIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear CIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->cir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->cir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_set_clear_eir_bw_alloc - set or clear EIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear EIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->eir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->eir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_bw_alloc - save VSI node's bw alloc information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_cir_bw - set or clear CIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear CIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = 0;
+	} else {
+		/* Save type of bw information */
+		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_eir_bw - set or clear EIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear EIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved shared bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+		/* save EIR bw information */
+		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_shared_bw - set or clear shared bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear shared bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved EIR bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+		/* save shared bw information */
+		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = bw;
+	}
+}
+
+/**
+ * ice_sched_save_vsi_bw - save VSI node's bw information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_prio - set or clear priority information
+ * @bw_t_info: bandwidth type information structure
+ * @prio: priority to save
+ *
+ * Save or clear priority (prio) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
+{
+	bw_t_info->generic = prio;
+	if (bw_t_info->generic)
+		ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_prio - save VSI node's priority information
+ * @pi: port information structure
+ * @vsi_handle: Software VSI handle
+ * @tc: traffic class
+ * @prio: priority to save
+ *
+ * Save priority information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			u8 prio)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw_alloc - save agg node's bw alloc information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: bandwidth alloc information
+ *
+ * Save bw alloc information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw - save agg node's bw information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_vsi_bw_lmt_per_tc - configure VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_dflt_lmt_per_tc - configure default VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function configures default bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_lmt_per_tc - configure aggregator bw limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function applies bw limit to aggregator scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator bw default limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function applies default bw limit to aggregator scheduling node based
+ * on tc information.
+ */
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_shared_lmt - configure VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, bw);
+}
+
+/**
+ * ice_cfg_vsi_bw_no_shared_lmt - configure VSI bw for no shared limiter
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes the shared rate limiter (SRL) of all VSI type nodes
+ * across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
+					       ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_agg_bw_shared_lmt - configure aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * nodes across all traffic classes for the aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, bw);
+}
+
+/**
+ * ice_cfg_agg_bw_no_shared_lmt - configure aggregator bw for no shared limiter
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the shared rate limiter (SRL) of all agg type nodes
+ * across all traffic classes for the aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_vsi_q_priority - config VSI queue priority of nodes
+ * @pi: port information structure
+ * @num_qs: number of VSI queues
+ * @q_ids: queue node TEIDs array
+ * @q_prio: queue priority array
+ *
+ * This function configures the queue node priority (Sibling Priority) of the
+ * passed in VSI's queue(s) for a given traffic class (tc).
+ */
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_qs; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
+		if (!node || node->info.data.elem_type !=
+		    ICE_AQC_ELEM_TYPE_LEAF) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		/* Configure Priority */
+		status = ice_sched_cfg_sibl_node_prio(hw, node, q_prio[i]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_agg_vsi_priority_per_tc - config agg's VSI priority per tc
+ * @pi: port information structure
+ * @agg_id: Aggregator id
+ * @num_vsis: number of VSI(s)
+ * @vsi_handle_arr: array of software VSI handles
+ * @node_prio: pointer to node priority
+ * @tc: traffic class
+ *
+ * This function configures the node priority (Sibling Priority) of the
+ * passed in VSI's for a given traffic class (tc) of an Aggregator id.
+ */
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		goto exit_agg_priority_per_tc;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_agg_priority_per_tc;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		goto exit_agg_priority_per_tc;
+
+	if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
+		goto exit_agg_priority_per_tc;
+
+	for (i = 0; i < num_vsis; i++) {
+		struct ice_sched_node *vsi_node;
+		bool vsi_handle_valid = false;
+		u16 vsi_handle;
+
+		status = ICE_ERR_PARAM;
+		vsi_handle = vsi_handle_arr[i];
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			goto exit_agg_priority_per_tc;
+		/* Verify child nodes before applying settings */
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				vsi_handle_valid = true;
+				break;
+			}
+		if (!vsi_handle_valid)
+			goto exit_agg_priority_per_tc;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			goto exit_agg_priority_per_tc;
+
+		if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
+			/* Configure Priority */
+			status = ice_sched_cfg_sibl_node_prio(hw, vsi_node,
+							      node_prio[i]);
+			if (status)
+				break;
+			status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
+							 node_prio[i]);
+			if (status)
+				break;
+		}
+	}
+
+exit_agg_priority_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_alloc - config VSI bw alloc per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @ena_tcmap: enabled tc map
+ * @rl_type: Rate limit type CIR/EIR
+ * @bw_alloc: Array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in VSI's
+ * node(s) for enabled traffic class.
+ */
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
+						     rl_type, bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_alloc - config agg bw alloc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @ena_tcmap: enabled tc map
+ * @rl_type: rate limit type CIR/EIR
+ * @bw_alloc: array of bw alloc
+ *
+ * This function configures the bw allocation of passed in aggregator for
+ * enabled traffic class(s).
+ */
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_cfg_agg_bw_alloc;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+exit_cfg_agg_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_calc_wakeup - calculate rl profile wakeup parameter
+ * @bw: bandwidth in kbps
+ *
+ * This function calculates the wakeup parameter of rl profile.
+ */
+static u16 ice_sched_calc_wakeup(s32 bw)
+{
+	s64 bytes_per_sec, wakeup_int, wakeup_a, wakeup_b, wakeup_f;
+	s32 wakeup_f_int;
+	u16 wakeup = 0;
+
+	/* Get the wakeup integer value */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+	wakeup_int = DIV_64BIT(ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+	if (wakeup_int > 63) {
+		wakeup = (u16)((1 << 15) | wakeup_int);
+	} else {
+		/* Calculate fraction value up to 4 decimals
+		 * Convert Integer value to a constant multiplier
+		 */
+		wakeup_b = (s64)ICE_RL_PROF_MULTIPLIER * wakeup_int;
+		wakeup_a = DIV_64BIT((s64)ICE_RL_PROF_MULTIPLIER *
+				     ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+
+		/* Get Fraction value */
+		wakeup_f = wakeup_a - wakeup_b;
+
+		/* Round up the fractional value */
+		if (wakeup_f > DIV_64BIT(ICE_RL_PROF_MULTIPLIER, 2))
+			wakeup_f += 1;
+
+		wakeup_f_int = (s32)DIV_64BIT(wakeup_f * ICE_RL_PROF_FRACTION,
+					      ICE_RL_PROF_MULTIPLIER);
+		wakeup |= (u16)(wakeup_int << 9);
+		wakeup |= (u16)(0x1ff & wakeup_f_int);
+	}
+
+	return wakeup;
+}
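+
+/* Wakeup encoding produced above, as derived from the code (for
+ * reference): when wakeup_int > 63, bit 15 is set and the integer value
+ * occupies the low bits with no fractional part; otherwise bits 14:9
+ * carry the 6-bit integer part and bits 8:0 carry the fraction in units
+ * of 1/ICE_RL_PROF_FRACTION.
+ */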
+
+/**
+ * ice_sched_bw_to_rl_profile - convert bw to profile parameters
+ * @bw: bandwidth in kbps
+ * @profile: profile parameters to return
+ *
+ * This function converts the bw to profile structure format.
+ */
+static enum ice_status
+ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	s64 bytes_per_sec, ts_rate, mv_tmp;
+	bool found = false;
+	s32 encode = 0;
+	s64 mv = 0;
+	s32 i;
+
+	/* Bw settings range is from 0.5Mb/sec to 100Gb/sec */
+	if (bw < ICE_SCHED_MIN_BW || bw > ICE_SCHED_MAX_BW)
+		return status;
+
+	/* Bytes per second from kbps */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+
+	/* encode is 6 bits, but only 5 bits are really useful */
+	for (i = 0; i < 64; i++) {
+		u64 pow_result = BIT_ULL(i);
+
+		ts_rate = DIV_64BIT((s64)ICE_RL_PROF_FREQUENCY,
+				    pow_result * ICE_RL_PROF_TS_MULTIPLIER);
+		if (ts_rate <= 0)
+			continue;
+
+		/* Multiplier value */
+		mv_tmp = DIV_64BIT(bytes_per_sec * ICE_RL_PROF_MULTIPLIER,
+				   ts_rate);
+
+		/* Round to the nearest ICE_RL_PROF_MULTIPLIER */
+		mv = round_up_64bit(mv_tmp, ICE_RL_PROF_MULTIPLIER);
+
+		/* First multiplier value greater than the given
+		 * accuracy bytes
+		 */
+		if (mv > ICE_RL_PROF_ACCURACY_BYTES) {
+			encode = i;
+			found = true;
+			break;
+		}
+	}
+	if (found) {
+		u16 wm;
+
+		wm = ice_sched_calc_wakeup(bw);
+		profile->rl_multiply = CPU_TO_LE16(mv);
+		profile->wake_up_calc = CPU_TO_LE16(wm);
+		profile->rl_encode = CPU_TO_LE16(encode);
+		status = ICE_SUCCESS;
+	} else {
+		status = ICE_ERR_DOES_NOT_EXIST;
+	}
+
+	return status;
+}
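+
+/* Sketch of the search above: for each candidate encode i, the timeslice
+ * rate is ICE_RL_PROF_FREQUENCY / (2^i * ICE_RL_PROF_TS_MULTIPLIER); the
+ * first i whose rounded multiplier value exceeds
+ * ICE_RL_PROF_ACCURACY_BYTES is kept, and mv/encode plus the calculated
+ * wakeup value are written into the profile in LE16 format.
+ */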
+
+/**
+ * ice_sched_add_rl_profile - add rl profile
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: specifies in which layer to create profile
+ *
+ * This function first checks the existing list for a corresponding bw
+ * parameter. If it exists, it returns the associated profile; otherwise
+ * it creates a new rate limit profile for the requested bw, adds it to
+ * the hw db and the local list, and returns the new profile or NULL on
+ * error. The caller needs to hold the scheduler lock.
+ */
+static struct ice_aqc_rl_profile_info *
+ice_sched_add_rl_profile(struct ice_port_info *pi,
+			 enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	u16 profiles_added = 0, num_profiles = 1;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw;
+	u8 profile_type;
+
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		break;
+	default:
+		return NULL;
+	}
+
+	if (!pi)
+		return NULL;
+	hw = pi->hw;
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    rl_prof_elem->bw == bw)
+			/* Return existing profile id info */
+			return rl_prof_elem;
+
+	/* Create new profile id */
+	rl_prof_elem = (struct ice_aqc_rl_profile_info *)
+		ice_malloc(hw, sizeof(*rl_prof_elem));
+
+	if (!rl_prof_elem)
+		return NULL;
+
+	status = ice_sched_bw_to_rl_profile(bw, &rl_prof_elem->profile);
+	if (status != ICE_SUCCESS)
+		goto exit_add_rl_prof;
+
+	rl_prof_elem->bw = bw;
+	/* layer_num is zero relative, and fw expects level from 1 to 9 */
+	rl_prof_elem->profile.level = layer_num + 1;
+	rl_prof_elem->profile.flags = profile_type;
+	rl_prof_elem->profile.max_burst_size = CPU_TO_LE16(hw->max_burst_size);
+
+	/* Create new entry in hw db */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_prof_elem->profile;
+	status = ice_aq_add_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+				       &profiles_added, NULL);
+	if (status || profiles_added != num_profiles)
+		goto exit_add_rl_prof;
+
+	/* Good entry - add in the list */
+	rl_prof_elem->prof_id_ref = 0;
+	LIST_ADD(&rl_prof_elem->list_entry, &pi->rl_prof_list[layer_num]);
+	return rl_prof_elem;
+
+exit_add_rl_prof:
+	ice_free(hw, rl_prof_elem);
+	return NULL;
+}
+
+/**
+ * ice_sched_del_rl_profile - remove rl profile
+ * @hw: pointer to the hw struct
+ * @rl_info: rate limit profile information
+ *
+ * If the profile id is no longer referenced, this function removes the
+ * profile id and its associated parameters from the hw db and the local
+ * list. The caller needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	u16 num_profiles_removed;
+	enum ice_status status;
+	u16 num_profiles = 1;
+
+	if (rl_info->prof_id_ref != 0)
+		return ICE_ERR_IN_USE;
+
+	/* Safe to remove profile id */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_info->profile;
+	status = ice_aq_remove_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+					  &num_profiles_removed, NULL);
+	if (status || num_profiles_removed != num_profiles)
+		return ICE_ERR_CFG;
+
+	/* Delete stale entry now */
+	LIST_DEL(&rl_info->list_entry);
+	ice_free(hw, rl_info);
+	return status;
+}
+
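A hedged sketch of the profile lifecycle the two helpers above imply. It is written as if inside ice_sched.c (ice_sched_add_rl_profile() is static), the scheduler lock is assumed to be held, and the 10 Gbps figure is arbitrary:

	static void rl_profile_lifecycle_sketch(struct ice_port_info *pi,
						u8 layer)
	{
		struct ice_aqc_rl_profile_info *prof;

		/* Look up or create a 10 Gbps (value in Kbps) max-rate
		 * profile on this layer
		 */
		prof = ice_sched_add_rl_profile(pi, ICE_MAX_BW, 10000000,
						layer);
		if (!prof)
			return;

		prof->prof_id_ref++;	/* a node now references it */

		/* While referenced, removal is refused with
		 * ICE_ERR_IN_USE
		 */
		if (ice_sched_del_rl_profile(pi->hw, prof) != ICE_ERR_IN_USE)
			return;

		prof->prof_id_ref--;			/* drop the reference */
		ice_sched_del_rl_profile(pi->hw, prof);	/* now it can go */
	}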
+/**
+ * ice_sched_rm_unused_rl_prof - remove unused rl profile
+ * @pi: port information structure
+ *
+ * This function removes unused rate limit profiles from the hw and
+ * SW DB. The caller needs to hold scheduler lock.
+ */
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			if (!ice_sched_del_rl_profile(pi->hw, rl_prof_elem))
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Removed rl profile\n");
+		}
+	}
+}
+
+/**
+ * ice_sched_update_elem - update element
+ * @hw: pointer to the hw struct
+ * @node: pointer to node
+ * @info: node info to update
+ *
+ * This function updates the HW DB and the local SW DB of the node. It
+ * updates the node's scheduling parameters from the info argument's data
+ * buffer (info->data) and returns success, or an error if configuring
+ * the sched element fails. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
+		      struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_aqc_conf_elem buf;
+	enum ice_status status;
+	u16 elem_cfgd = 0;
+	u16 num_elems = 1;
+
+	buf.generic[0] = *info;
+	/* Parent teid is a reserved field in this aq call */
+	buf.generic[0].parent_teid = 0;
+	/* Element type is a reserved field in this aq call */
+	buf.generic[0].data.elem_type = 0;
+	/* The flags field is reserved in this aq call */
+	buf.generic[0].data.flags = 0;
+
+	/* Update HW DB */
+	/* Configure element node */
+	status = ice_aq_cfg_sched_elems(hw, num_elems, &buf, sizeof(buf),
+					&elem_cfgd, NULL);
+	if (status || elem_cfgd != num_elems) {
+		ice_debug(hw, ICE_DBG_SCHED, "Config sched elem error\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* Config success case */
+	/* Now update local SW DB */
+	/* Only copy the data portion of info buffer */
+	node->info.data = info->data;
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_lmt - configure node sched params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @rl_prof_id: rate limit profile id
+ *
+ * This function configures node element's bw limit.
+ */
+static enum ice_status
+ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u16 rl_prof_id)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_MAX_BW:
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			return ICE_ERR_CFG;
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_SHARED_BW:
+		/* Check for removing shared bw */
+		if (rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID) {
+			/* remove shared profile */
+			data->valid_sections &= ~ICE_AQC_ELEM_VALID_SHARED;
+			data->srl_id = 0; /* clear srl field */
+
+			/* enable back EIR to default profile */
+			data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+			data->eir_bw.bw_profile_idx =
+				CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+			break;
+		}
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if ((data->valid_sections & ICE_AQC_ELEM_VALID_EIR) &&
+		    (LE16_TO_CPU(data->eir_bw.bw_profile_idx) !=
+			    ICE_SCHED_DFLT_RL_PROF_ID))
+			return ICE_ERR_CFG;
+		/* EIR bw is set to default, disable it */
+		data->valid_sections &= ~ICE_AQC_ELEM_VALID_EIR;
+		/* Okay to enable shared bw now */
+		data->valid_sections |= ICE_AQC_ELEM_VALID_SHARED;
+		data->srl_id = CPU_TO_LE16(rl_prof_id);
+		break;
+	default:
+		/* Unknown rate limit type */
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	return ice_sched_update_elem(hw, node, &buf);
+}
+
+/**
+ * ice_sched_get_node_rl_prof_id - get node's rate limit profile id
+ * @node: sched node
+ * @rl_type: rate limit type
+ *
+ * If an existing profile matches, it returns the corresponding rate
+ * limit profile id; otherwise it returns an invalid id as an error.
+ */
+static u16
+ice_sched_get_node_rl_prof_id(struct ice_sched_node *node,
+			      enum ice_rl_type rl_type)
+{
+	u16 rl_prof_id = ICE_SCHED_INVAL_PROF_ID;
+	struct ice_aqc_txsched_elem *data;
+
+	data = &node->info.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_CIR)
+			rl_prof_id = LE16_TO_CPU(data->cir_bw.bw_profile_idx);
+		break;
+	case ICE_MAX_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_EIR)
+			rl_prof_id = LE16_TO_CPU(data->eir_bw.bw_profile_idx);
+		break;
+	case ICE_SHARED_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			rl_prof_id = LE16_TO_CPU(data->srl_id);
+		break;
+	default:
+		break;
+	}
+
+	return rl_prof_id;
+}
+
+/**
+ * ice_sched_get_rl_prof_layer - selects rate limit profile creation layer
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @layer_index: layer index
+ *
+ * This function returns the requested profile creation layer.
+ */
+static u8
+ice_sched_get_rl_prof_layer(struct ice_port_info *pi, enum ice_rl_type rl_type,
+			    u8 layer_index)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (layer_index >= hw->num_tx_sched_layers)
+		return ICE_SCHED_INVAL_LAYER_NUM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (hw->layer_info[layer_index].max_cir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_MAX_BW:
+		if (hw->layer_info[layer_index].max_eir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_SHARED_BW:
+		/* if the current layer doesn't support SRL profile
+		 * creation, try a layer up or down.
+		 */
+		if (hw->layer_info[layer_index].max_srl_profiles)
+			return layer_index;
+		else if (layer_index < hw->num_tx_sched_layers - 1 &&
+			 hw->layer_info[layer_index + 1].max_srl_profiles)
+			return layer_index + 1;
+		else if (layer_index > 0 &&
+			 hw->layer_info[layer_index - 1].max_srl_profiles)
+			return layer_index - 1;
+		break;
+	default:
+		break;
+	}
+	return ICE_SCHED_INVAL_LAYER_NUM;
+}
+
+/**
+ * ice_sched_get_srl_node - get shared rate limit node
+ * @node: tree node
+ * @srl_layer: shared rate limit layer
+ *
+ * This function returns the SRL node to be used for shared rate limiting.
+ * The caller needs to hold scheduler lock.
+ */
+static struct ice_sched_node *
+ice_sched_get_srl_node(struct ice_sched_node *node, u8 srl_layer)
+{
+	if (srl_layer > node->tx_sched_layer)
+		return node->children[0];
+	else if (srl_layer < node->tx_sched_layer)
+		/* A node can't be created without a parent; every node
+		 * except the root has a valid parent.
+		 */
+		return node->parent;
+	else
+		return node;
+}
+
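To make the selection above concrete, for a node at tx_sched_layer 5 (larger layer numbers are further from the root):

	/*
	 * ice_sched_get_srl_node(node, 6) -> node->children[0]
	 * ice_sched_get_srl_node(node, 4) -> node->parent
	 * ice_sched_get_srl_node(node, 5) -> node itself
	 */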
+/**
+ * ice_sched_rm_rl_profile - remove rl profile id
+ * @pi: port information structure
+ * @layer_num: layer number where profiles are saved
+ * @profile_type: profile type like EIR, CIR, or SRL
+ * @profile_id: profile id to remove
+ *
+ * This function removes the rate limit profile of type 'profile_type'
+ * with profile id 'profile_id' from layer 'layer_num'. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
+			u16 profile_id)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* Check the existing list for rl profile */
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    LE16_TO_CPU(rl_prof_elem->profile.profile_id) ==
+		    profile_id) {
+			if (rl_prof_elem->prof_id_ref)
+				rl_prof_elem->prof_id_ref--;
+
+			/* Remove old profile id from database */
+			status = ice_sched_del_rl_profile(pi->hw, rl_prof_elem);
+			if (status && status != ICE_ERR_IN_USE)
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+			break;
+		}
+	if (status == ICE_ERR_IN_USE)
+		status = ICE_SUCCESS;
+	return status;
+}
+
+/**
+ * ice_sched_set_node_bw_dflt - set node's bandwidth limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ * @layer_num: layer number where rl profiles are saved
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   enum ice_rl_type rl_type, u8 layer_num)
+{
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 profile_type;
+	u16 rl_prof_id;
+	u16 old_id;
+
+	hw = pi->hw;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		/* No SRL is configured for default case */
+		rl_prof_id = ICE_SCHED_NO_SHARED_RL_PROF_ID;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* Remove stale rl profile id */
+	if (old_id == ICE_SCHED_DFLT_RL_PROF_ID ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID)
+		return status;
+	return ice_sched_rm_rl_profile(pi, layer_num, profile_type, old_id);
+}
+
+/**
+ * ice_sched_set_eir_srl_excl - set EIR/SRL exclusiveness
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @layer_num: layer number where rate limit profiles are saved
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth value
+ *
+ * This function configures a node element's bw as SRL or EIR exclusively.
+ * EIR bw and Shared bw profiles are mutually exclusive and hence only one of
+ * them may be set for any given element. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   u8 layer_num, enum ice_rl_type rl_type, u32 bw)
+{
+	if (rl_type == ICE_SHARED_BW) {
+		/* An SRL node is passed in this case; it may be a
+		 * different node.
+		 */
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* SRL being removed, ice_sched_cfg_node_bw_lmt()
+			 * enables EIR to default. EIR is not set in this
+			 * case, so no additional action is required.
+			 */
+			return ICE_SUCCESS;
+
+		/* SRL being configured, set EIR to default here.
+		 * ice_sched_cfg_node_bw_lmt() disables EIR when it
+		 * configures SRL
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node, ICE_MAX_BW,
+						  layer_num);
+	} else if (rl_type == ICE_MAX_BW &&
+		   node->info.data.valid_sections & ICE_AQC_ELEM_VALID_SHARED) {
+		/* Remove Shared profile. Set default shared bw call
+		 * removes shared profile for a node.
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node,
+						  ICE_SHARED_BW,
+						  layer_num);
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_node_bw - set node's bandwidth
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: layer number
+ *
+ * This function adds a new profile corresponding to the requested bw,
+ * configures the node's rl profile id of type cir, eir, or srl, and
+ * removes the old profile id from the local database. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
+		      enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 old_id, rl_prof_id;
+
+	rl_prof_info = ice_sched_add_rl_profile(pi, rl_type, bw, layer_num);
+	if (!rl_prof_info)
+		return status;
+
+	rl_prof_id = LE16_TO_CPU(rl_prof_info->profile.profile_id);
+
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* New changes have been applied */
+	/* Increment the profile id reference count */
+	rl_prof_info->prof_id_ref++;
+
+	/* Check for old id removal */
+	if ((old_id == ICE_SCHED_DFLT_RL_PROF_ID && rl_type != ICE_SHARED_BW) ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID || old_id == rl_prof_id)
+		return status;
+
+	return ice_sched_rm_rl_profile(pi, layer_num,
+				       rl_prof_info->profile.flags,
+				       old_id);
+}
+
+/**
+ * ice_sched_set_node_bw_lmt - set node's bw limit
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * It updates node's bw limit parameters like bw rl profile id of type cir,
+ * eir, or srl. The caller needs to hold scheduler lock.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_node *cfg_node = node;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 layer_num;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+						node->tx_sched_layer);
+	if (layer_num >= hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node may be different */
+		cfg_node = ice_sched_get_srl_node(node, layer_num);
+		if (!cfg_node)
+			return ICE_ERR_CFG;
+	}
+	/* EIR bw and Shared bw profiles are mutually exclusive and
+	 * hence only one of them may be set for any given element
+	 */
+	status = ice_sched_set_eir_srl_excl(pi, cfg_node, layer_num, rl_type,
+					    bw);
+	if (status)
+		return status;
+	if (bw == ICE_SCHED_DFLT_BW)
+		return ice_sched_set_node_bw_dflt(pi, cfg_node, rl_type,
+						  layer_num);
+	return ice_sched_set_node_bw(pi, cfg_node, rl_type, bw, layer_num);
+}
+
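A usage sketch for the exported helper above. Assumptions: pi and node come from an existing scheduler tree, the caller holds pi->sched_lock as the function requires, and the 1 Gbps figure is arbitrary:

	static enum ice_status example_cap_node(struct ice_port_info *pi,
						struct ice_sched_node *node)
	{
		enum ice_status status;

		/* Cap the node's max (EIR) rate at 1 Gbps (Kbps units) */
		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
						   1000000);
		if (status)
			return status;

		/* Passing ICE_SCHED_DFLT_BW restores the default limit */
		return ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
						 ICE_SCHED_DFLT_BW);
	}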
+/**
+ * ice_sched_set_node_bw_dflt_lmt - set node's bw limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi,
+			       struct ice_sched_node *node,
+			       enum ice_rl_type rl_type)
+{
+	return ice_sched_set_node_bw_lmt(pi, node, rl_type,
+					 ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_validate_srl_node - Check node for SRL applicability
+ * @node: sched node to configure
+ * @sel_layer: selected SRL layer
+ *
+ * This function checks if the SRL can be applied to a selected layer node on
+ * behalf of the requested node (first argument). This function needs to be
+ * called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
+{
+	/* SRL profiles are not available on all layers. Check if the
+	 * SRL profile can be applied to a node above or below the
+	 * requested node. SRL configuration is possible only if the
+	 * selected layer's node has single child.
+	 */
+	if (sel_layer == node->tx_sched_layer ||
+	    ((sel_layer == node->tx_sched_layer + 1) &&
+	    node->num_children == 1) ||
+	    ((sel_layer == node->tx_sched_layer - 1) &&
+	    (node->parent && node->parent->num_children == 1)))
+		return ICE_SUCCESS;
+
+	return ICE_ERR_CFG;
+}
+
+/**
+ * ice_sched_set_q_bw_lmt - sets queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of queue scheduling node.
+ */
+static enum ice_status
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		goto exit_q_bw_lmt;
+	}
+
+	/* Return error if it is not a leaf node */
+	if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
+		goto exit_q_bw_lmt;
+
+	/* SRL bandwidth layer selection */
+	if (rl_type == ICE_SHARED_BW) {
+		u8 sel_layer; /* selected layer */
+
+		sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
+							node->tx_sched_layer);
+		if (sel_layer >= pi->hw->num_tx_sched_layers) {
+			status = ICE_ERR_PARAM;
+			goto exit_q_bw_lmt;
+		}
+		status = ice_sched_validate_srl_node(node, sel_layer);
+		if (status)
+			goto exit_q_bw_lmt;
+	}
+
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_q_bw_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_q_bw_lmt - configure queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+}
+
+/**
+ * ice_cfg_q_bw_dflt_lmt - configure queue bw default limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ *
+ * This function configures bw default limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+}
+
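A hedged caller sketch for the two exported queue helpers above. The q_teid value and the 100 Mbps figure are illustrative; the helpers acquire the scheduler lock internally:

	static enum ice_status example_shape_queue(struct ice_port_info *pi,
						   u32 q_teid)
	{
		enum ice_status status;

		/* Limit the queue to 100 Mbps (value is in Kbps) */
		status = ice_cfg_q_bw_lmt(pi, q_teid, ICE_MAX_BW, 100000);
		if (status)
			return status;

		/* Later, remove the limit again */
		return ice_cfg_q_bw_dflt_lmt(pi, q_teid, ICE_MAX_BW);
	}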
+/**
+ * ice_sched_save_tc_node_bw - save tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function saves the modified values of bandwidth settings for later
+ * replay purpose (restore) after reset.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_lmt - sets tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bandwidth limit of tc node.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+			     enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw;
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
+	if (!status)
+		status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
+
+exit_set_tc_node_bw:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_lmt - configure tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
+}
+
+/**
+ * ice_cfg_tc_node_bw_dflt_lmt - configure tc node bw default limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ *
+ * This function configures bw default limit of tc node.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_save_tc_node_bw_alloc - save tc node's bw alloc information
+ * @pi: port information structure
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save the bw alloc information of the TC node for post-replay use.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+				enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_alloc - set tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures the bandwidth alloc of the tc node, saves the
+ * changed settings for replay purposes, and returns success if it
+ * succeeds in modifying the bandwidth alloc setting.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			       enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
+					     bw_alloc);
+	if (status)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+
+exit_set_tc_node_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_alloc - configure tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures bw limit of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+}
+
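A hedged sketch combining the two exported TC helpers above. The 10 Gbps cap is arbitrary, and the bw_alloc value of 50 assumes the parameter is a relative weight; the exact unit is defined by the scheduler profile, not shown here:

	static enum ice_status example_cfg_tc(struct ice_port_info *pi)
	{
		enum ice_status status;

		/* Cap TC 0 at 10 Gbps (value in Kbps) */
		status = ice_cfg_tc_node_bw_lmt(pi, 0, ICE_MAX_BW, 10000000);
		if (status)
			return status;

		/* Give TC 0 a relative weight for max-bw arbitration */
		return ice_cfg_tc_node_bw_alloc(pi, 0, ICE_MAX_BW, 50);
	}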
+/**
+ * ice_sched_set_agg_bw_dflt_lmt - set agg node's bw limit to default
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves the aggregator node based on the VSI handle
+ * and tc, and sets the node's bw limits to default across all tc(s).
+ * This function needs to be called with the scheduler lock held.
+ */
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *node;
+
+		node = vsi_ctx->sched.ag_node[tc];
+		if (!node)
+			continue;
+
+		/* Set min profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
+		if (status)
+			break;
+
+		/* Set max profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
+		if (status)
+			break;
+
+		/* Remove shared profile, if there is one */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node,
+							ICE_SHARED_BW);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_get_node_by_id_type - get node from id type
+ * @pi: port information structure
+ * @id: identifier
+ * @agg_type: type of aggregator
+ * @tc: traffic class
+ *
+ * This function returns the node identified by id and aggregator type,
+ * based on the traffic class (tc). This function needs to be called
+ * with the scheduler lock held.
+ */
+static struct ice_sched_node *
+ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
+			      enum ice_agg_type agg_type, u8 tc)
+{
+	struct ice_sched_node *node = NULL;
+	struct ice_sched_node *child_node;
+
+	switch (agg_type) {
+	case ICE_AGG_TYPE_VSI: {
+		struct ice_vsi_ctx *vsi_ctx;
+		u16 vsi_handle = (u16)id;
+
+		if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+			break;
+		/* Get sched_vsi_info */
+		vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+		if (!vsi_ctx)
+			break;
+		node = vsi_ctx->sched.vsi_node[tc];
+		break;
+	}
+
+	case ICE_AGG_TYPE_AGG: {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (tc_node)
+			node = ice_sched_get_agg_node(pi->hw, tc_node, id);
+		break;
+	}
+
+	case ICE_AGG_TYPE_Q:
+		/* The current implementation allows only a single
+		 * queue to be modified
+		 */
+		node = ice_sched_get_node(pi, id);
+		break;
+
+	case ICE_AGG_TYPE_QG:
+		/* The current implementation allows only a single
+		 * queue group to be modified
+		 */
+		child_node = ice_sched_get_node(pi, id);
+		if (!child_node)
+			break;
+		node = child_node->parent;
+		break;
+
+	default:
+		break;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_set_node_bw_lmt_per_tc - set node bw limit per tc
+ * @pi: port information structure
+ * @id: id (software VSI handle or AGG id)
+ * @agg_type: aggregator type (VSI or AGG type node)
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of VSI or Aggregator scheduling node
+ * based on tc information from passed in argument bw.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return status;
+
+	if (rl_type == ICE_UNKNOWN_BW)
+		return status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_get_node_by_id_type(pi, id, agg_type, tc);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong id, agg type, or tc\n");
+		goto exit_set_node_bw_lmt_per_tc;
+	}
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_set_node_bw_lmt_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_validate_vsi_srl_node - validate VSI SRL node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function validates the SRL node of the VSI node if the available
+ * SRL layer is different from the VSI node layer, on all tc(s). This
+ * function needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		enum ice_status status;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = vsi_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(vsi_node, sel_layer);
+		if (status)
+			return status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_vsi_bw_shared_lmt - set VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle. When
+ * a bw value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the
+ * node.
+ */
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_vsi_srl_node(pi, vsi_handle);
+	if (status)
+		goto exit_set_vsi_bw_shared_lmt;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, vsi_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, vsi_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_set_vsi_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_validate_agg_srl_node - validate AGG SRL node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the SRL node of the AGG node if the available
+ * SRL layer is different from the AGG node layer, on all tc(s). This
+ * function needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &pi->hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		return ICE_ERR_PARAM;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = agg_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(agg_node, sel_layer);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_bw_shared_lmt - set aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * nodes across all traffic classes for aggregator matching agg_id. When
+ * bw value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the
+ * node(s).
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *tmp;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_srl_node(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, tmp, &pi->hw->agg_list,
+				 ice_sched_agg_info, list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_agg_bw_shared_lmt;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		struct ice_sched_node *tc_node, *agg_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, agg_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, agg_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_agg_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
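A hedged sketch of the two exported shared-limit helpers above; the 5 Gbps and 20 Gbps figures are arbitrary, and both helpers take the scheduler lock internally:

	static enum ice_status
	example_shared_limits(struct ice_port_info *pi, u16 vsi_handle,
			      u32 agg_id)
	{
		enum ice_status status;

		/* Share 5 Gbps (in Kbps) across all TC nodes of this VSI */
		status = ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
							 5000000);
		if (status)
			return status;

		/* Share 20 Gbps across all nodes of this aggregator;
		 * passing ICE_SCHED_DFLT_BW instead would remove the SRL.
		 */
		return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, 20000000);
	}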
+/**
+ * ice_sched_cfg_sibl_node_prio - configure node sibling priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only. This
+ * function needs to be called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	priority = (priority << ICE_AQC_ELEM_GENERIC_PRIO_S) &
+		   ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic &= ~ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic |= priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_alloc - configure node bw weight/alloc params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @bw_alloc: bw weight/allocation
+ *
+ * This function configures node element's bw allocation.
+ */
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	if (rl_type == ICE_MIN_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else if (rl_type == ICE_MAX_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else {
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_agg_cfg - create an aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function creates an aggregator node and intermediate nodes if required
+ * for the given TC
+ */
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *parent, *agg_node, *tc_node;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u32 first_node_teid;
+	u16 num_nodes_added;
+	u8 i, aggl;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	/* Does the agg node already exist? */
+	if (agg_node)
+		return status;
+
+	aggl = ice_sched_get_agg_layer(hw);
+
+	/* need one node in Agg layer */
+	num_nodes[aggl] = 1;
+
+	/* Check whether the intermediate nodes have space to add the
+	 * new agg. If they are full, then SW needs to allocate a new
+	 * intermediate node on those layers
+	 */
+	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
+		parent = ice_sched_get_first_node(hw, tc_node, i);
+
+		/* scan all the siblings */
+		while (parent) {
+			if (parent->num_children < hw->max_children[i])
+				break;
+			parent = parent->sibling;
+		}
+
+		/* all the nodes are full, reserve one for this layer */
+		if (!parent)
+			num_nodes[i]++;
+	}
+
+	/* add the agg node */
+	parent = tc_node;
+	for (i = hw->sw_entry_point_layer; i <= aggl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			/* register the aggregator id with the agg node */
+			if (parent && i == aggl)
+				parent->agg_id = agg_id;
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_is_agg_inuse - check whether the agg is in use or not
+ * @pi: port information structure
+ * @node: node pointer
+ *
+ * This function checks whether the agg node is attached to any VSI.
+ */
+static bool
+ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	if (node->tx_sched_layer < vsil - 1) {
+		for (i = 0; i < node->num_children; i++)
+			if (ice_sched_is_agg_inuse(pi, node->children[i]))
+				return true;
+		return false;
+	} else {
+		return node->num_children ? true : false;
+	}
+}
+
+/**
+ * ice_sched_rm_agg_cfg - remove the aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function removes the aggregator node and intermediate nodes if any
+ * from the given TC
+ */
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Can't remove the agg node if it has children */
+	if (ice_sched_is_agg_inuse(pi, agg_node))
+		return ICE_ERR_IN_USE;
+
+	/* need to remove the whole subtree if agg node is the
+	 * only child.
+	 */
+	while (agg_node->tx_sched_layer > hw->sw_entry_point_layer) {
+		struct ice_sched_node *parent = agg_node->parent;
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (parent->num_children > 1)
+			break;
+
+		agg_node = parent;
+	}
+
+	ice_free_sched_node(pi, agg_node);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_get_free_vsi_parent - Find a free parent node in agg subtree
+ * @hw: pointer to the hw struct
+ * @node: pointer to a child node
+ * @num_nodes: num nodes count array
+ *
+ * This function walks through the aggregator subtree to find a free parent
+ * node
+ */
+static struct ice_sched_node *
+ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
+			      u16 *num_nodes)
+{
+	u8 l = node->tx_sched_layer;
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* Is it the VSI parent layer? */
+	if (l == vsil - 1)
+		return (node->num_children < hw->max_children[l]) ? node : NULL;
+
+	/* We have intermediate nodes. Let's walk through the subtree. If the
+	 * intermediate node has space to add a new node then clear the count
+	 */
+	if (node->num_children < hw->max_children[l])
+		num_nodes[l] = 0;
+	/* The recursive call below is intentional; it won't go more than
+	 * 2 or 3 levels deep.
+	 */
+	for (i = 0; i < node->num_children; i++) {
+		struct ice_sched_node *parent;
+
+		parent = ice_sched_get_free_vsi_parent(hw, node->children[i],
+						       num_nodes);
+		if (parent)
+			return parent;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_sched_update_parent - update the node's parent in SW DB
+ * @new_parent: pointer to a new parent node
+ * @node: pointer to a child node
+ *
+ * This function removes the child from the old parent and adds it to a new
+ * parent
+ */
+static void
+ice_sched_update_parent(struct ice_sched_node *new_parent,
+			struct ice_sched_node *node)
+{
+	struct ice_sched_node *old_parent;
+	u8 i, j;
+
+	old_parent = node->parent;
+
+	/* update the old parent children */
+	for (i = 0; i < old_parent->num_children; i++)
+		if (old_parent->children[i] == node) {
+			for (j = i + 1; j < old_parent->num_children; j++)
+				old_parent->children[j - 1] =
+					old_parent->children[j];
+			old_parent->num_children--;
+			break;
+		}
+
+	/* now move the node to a new parent */
+	new_parent->children[new_parent->num_children++] = node;
+	node->parent = new_parent;
+	node->info.parent_teid = new_parent->info.node_teid;
+}
+
+/**
+ * ice_sched_move_nodes - move child nodes to a given parent
+ * @pi: port information structure
+ * @parent: pointer to parent node
+ * @num_items: number of child nodes to be moved
+ * @list: pointer to child node teids
+ *
+ * This function moves the child nodes to a given parent.
+ */
+static enum ice_status
+ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
+		     u16 num_items, u32 *list)
+{
+	struct ice_aqc_move_elem *buf;
+	struct ice_sched_node *node;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw;
+	u16 grps_movd = 0;
+	u8 i;
+
+	hw = pi->hw;
+
+	if (!parent || !num_items)
+		return ICE_ERR_PARAM;
+
+	/* Does parent have enough space */
+	if (parent->num_children + num_items >=
+	    hw->max_children[parent->tx_sched_layer])
+		return ICE_ERR_AQ_FULL;
+
+	buf = (struct ice_aqc_move_elem *) ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_items; i++) {
+		node = ice_sched_find_node_by_teid(pi->root, list[i]);
+		if (!node) {
+			status = ICE_ERR_PARAM;
+			goto move_err_exit;
+		}
+
+		buf->hdr.src_parent_teid = node->info.parent_teid;
+		buf->hdr.dest_parent_teid = parent->info.node_teid;
+		buf->teid[0] = node->info.node_teid;
+		buf->hdr.num_elems = CPU_TO_LE16(1);
+		status = ice_aq_move_sched_elems(hw, 1, buf, sizeof(*buf),
+						 &grps_movd, NULL);
+		if (status || grps_movd != 1) {
+			status = ICE_ERR_CFG;
+			goto move_err_exit;
+		}
+
+		/* update the SW DB */
+		ice_sched_update_parent(parent, node);
+	}
+
+move_err_exit:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_move_vsi_to_agg - move VSI to aggregator node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function moves a VSI to an aggregator node or its subtree.
+ * Intermediate nodes may be created if required.
+ */
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc)
+{
+	struct ice_sched_node *vsi_node, *agg_node, *tc_node, *parent;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u32 first_node_teid, vsi_teid;
+	enum ice_status status;
+	u16 num_nodes_added;
+	u8 aggl, vsil, i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	aggl = ice_sched_get_agg_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	/* initialize intermediate node count to 1 between agg and VSI layers */
+	for (i = aggl + 1; i < vsil; i++)
+		num_nodes[i] = 1;
+
+	/* Check whether the agg subtree has any free node to add the VSI */
+	for (i = 0; i < agg_node->num_children; i++) {
+		parent = ice_sched_get_free_vsi_parent(pi->hw,
+						       agg_node->children[i],
+						       num_nodes);
+		if (parent)
+			goto move_nodes;
+	}
+
+	/* add new nodes */
+	parent = agg_node;
+	for (i = aggl + 1; i < vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+	}
+
+move_nodes:
+	vsi_teid = LE32_TO_CPU(vsi_node->info.node_teid);
+	return ice_sched_move_nodes(pi, parent, 1, &vsi_teid);
+}
+
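A hedged sketch tying the two exported aggregator helpers above together; locking is assumed to be handled by the caller, as with the other scheduler-lock helpers in this file:

	static enum ice_status
	example_move_vsi(struct ice_port_info *pi, u16 vsi_handle,
			 u32 agg_id, u8 tc)
	{
		enum ice_status status;

		/* Make sure the aggregator node exists for this TC */
		status = ice_sched_add_agg_cfg(pi, agg_id, tc);
		if (status)
			return status;

		/* Then re-parent the VSI subtree under that aggregator */
		return ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
	}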
+/**
+ * ice_cfg_rl_burst_size - Set burst size value
+ * @hw: pointer to the hw struct
+ * @bytes: burst size in bytes
+ *
+ * This function configures/sets the burst size to the requested new value.
+ * The new burst size value is used for future rate limit calls. It doesn't
+ * change existing or previously created RL profiles.
+ */
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
+{
+	u16 burst_size_to_prog;
+
+	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
+	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
+		return ICE_ERR_PARAM;
+	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
+		/* byte granularity case */
+		/* Disable MSB granularity bit */
+		burst_size_to_prog = ICE_BYTE_GRANULARITY;
+		/* round number to nearest 256 granularity */
+		bytes = ice_round_to_num(bytes, 256);
+		/* check rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
+		burst_size_to_prog |= (u16)bytes;
+	} else {
+		/* k bytes granularity case */
+		/* Enable MSB granularity bit */
+		burst_size_to_prog = ICE_KBYTE_GRANULARITY;
+		/* round number to nearest 1024 granularity */
+		bytes = ice_round_to_num(bytes, 1024);
+		/* check rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY;
+		/* The value is in k bytes */
+		burst_size_to_prog |= (u16)(bytes / 1024);
+	}
+	hw->max_burst_size = burst_size_to_prog;
+	return ICE_SUCCESS;
+}
+
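Two worked examples of the rounding above, assuming ice_round_to_num() rounds to the nearest multiple (as the comments state) and that 3000 falls within the byte-granularity range:

	/*
	 * ice_cfg_rl_burst_size(hw, 3000);
	 *   byte granularity: 3000 rounds to 3072 (12 * 256);
	 *   programmed value is 3072 with the KBYTE MSB clear.
	 *
	 * ice_cfg_rl_burst_size(hw, 100000);
	 *   kbyte granularity: 100000 rounds to 100352 (98 * 1024);
	 *   programmed value is 98 with the KBYTE MSB set.
	 */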
+/**
+ * ice_sched_replay_node_prio - re-configure node priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: priority value
+ *
+ * This function configures node element's priority value. It
+ * needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			   u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	data->generic = priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_replay_node_bw - replay node(s) bw
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @bw_t_info: bw type information
+ *
+ * This function restores node's bw from bw_t_info. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node,
+			 struct ice_bw_type_info *bw_t_info)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	u16 bw_alloc;
+
+	if (!node)
+		return status;
+	if (!ice_is_any_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CNT))
+		return ICE_SUCCESS;
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_PRIO)) {
+		status = ice_sched_replay_node_prio(hw, node,
+						    bw_t_info->generic);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MIN_BW,
+						   bw_t_info->cir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR_WT)) {
+		bw_alloc = bw_t_info->cir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MIN_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
+						   bw_t_info->eir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR_WT)) {
+		bw_alloc = bw_t_info->eir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MAX_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_SHARED))
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_SHARED_BW,
+						   bw_t_info->shared_bw);
+	return status;
+}
+
+/**
+ * ice_sched_replay_agg_bw - replay aggregator node(s) bw
+ * @hw: pointer to the hw struct
+ * @agg_info: aggregator data structure
+ *
+ * This function replays the bw settings of aggregator type nodes. The
+ * caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_agg_bw(struct ice_hw *hw, struct ice_sched_agg_info *agg_info)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_any_bit_set(agg_info->bw_t_info[tc].bw_t_bitmap,
+					ICE_BW_TYPE_CNT))
+			continue;
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		agg_node = ice_sched_get_agg_node(hw, tc_node,
+						  agg_info->agg_id);
+		if (!agg_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		status = ice_sched_replay_node_bw(hw, agg_node,
+						  &agg_info->bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_get_ena_tc_bitmap - get enabled TC bitmap
+ * @pi: port info struct
+ * @tc_bitmap: 8 bits TC bitmap to check
+ * @ena_tc_bitmap: 8 bits enabled TC bitmap to return
+ *
+ * This function returns the enabled TC bitmap in ena_tc_bitmap. TCs that
+ * went missing (e.g. after a reset) are filtered out. This function
+ * needs to be called with the scheduler lock held.
+ */
+static void
+ice_sched_get_ena_tc_bitmap(struct ice_port_info *pi, ice_bitmap_t *tc_bitmap,
+			    ice_bitmap_t *ena_tc_bitmap)
+{
+	u8 tc;
+
+	/* Some tc(s) may be missing after reset, adjust for replay */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++)
+		if (ice_is_tc_ena(*tc_bitmap, tc) &&
+		    (ice_sched_get_tc_node(pi, tc)))
+			ice_set_bit(tc, ena_tc_bitmap);
+}
+
+/**
+ * ice_sched_replay_agg - recreate aggregator node(s)
+ * @hw: pointer to the hw struct
+ *
+ * This function recreates aggregator type nodes which were not replayed
+ * earlier, and replays their aggregator bw information. These aggregator
+ * nodes are not yet associated with a VSI type node.
+ */
+void ice_sched_replay_agg(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		/* replay agg (re-create aggregator node) */
+		if (!ice_cmp_bitmap(agg_info->tc_bitmap,
+				    agg_info->replay_tc_bitmap,
+				    ICE_MAX_TRAFFIC_CLASS)) {
+			ice_declare_bitmap(replay_bitmap,
+					   ICE_MAX_TRAFFIC_CLASS);
+			enum ice_status status;
+
+			ice_zero_bitmap(replay_bitmap,
+					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_sched_get_ena_tc_bitmap(pi,
+						    agg_info->replay_tc_bitmap,
+						    replay_bitmap);
+			status = ice_sched_cfg_agg(hw->port_info,
+						   agg_info->agg_id,
+						   ICE_AGG_TYPE_AGG,
+						   replay_bitmap);
+			if (status) {
+				ice_info(hw, "Replay agg id[%d] failed\n",
+					 agg_info->agg_id);
+				/* Move on to next one */
+				continue;
+			}
+			/* Replay agg node bw (restore agg bw) */
+			status = ice_sched_replay_agg_bw(hw, agg_info);
+			if (status)
+				ice_info(hw, "Replay agg bw [id=%d] failed\n",
+					 agg_info->agg_id);
+		}
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
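A hedged sketch of the replay ordering these helpers imply after a reset; the per-VSI step is only referenced in a comment, since the VSI iteration lives outside this excerpt:

	static void example_replay_after_reset(struct ice_hw *hw)
	{
		/* 1. Zero the aggregator TC bitmaps (required preinit) */
		ice_sched_replay_agg_vsi_preinit(hw);

		/* 2. ...replay each VSI (see ice_replay_vsi_agg() below)... */

		/* 3. Re-create aggregator nodes not replayed via a VSI */
		ice_sched_replay_agg(hw);
	}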
+/**
+ * ice_sched_replay_agg_vsi_preinit - Agg/VSI replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * This function initializes the aggregator(s) TC bitmap to zero, a
+ * required preinit step for replaying aggregators.
+ */
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_info->tc_bitmap[0] = 0;
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			agg_vsi_info->tc_bitmap[0] = 0;
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_tc_node_bw - replay tc node(s) bw
+ * @hw: pointer to the hw struct
+ *
+ * This function replays the tc nodes' bw. It acquires the scheduler
+ * lock internally.
+ */
+enum ice_status
+ice_sched_replay_tc_node_bw(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node)
+			continue; /* tc not present */
+		status = ice_sched_replay_node_bw(hw, tc_node,
+						  &hw->tc_node_bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_bw - replay VSI type node(s) bw
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function replays VSI type node bandwidth. It needs to be called with
+ * the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
+			ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_bw_type_info *bw_t_info;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
+		status = ice_sched_replay_node_bw(hw, vsi_node, bw_t_info);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_agg - replay agg & VSI to aggregator node(s)
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays the aggregator node, the VSI-to-aggregator
+ * association, and their node bandwidth information. It needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status;
+
+	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
+	if (!agg_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	ice_sched_get_ena_tc_bitmap(pi, agg_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Replay agg node associated to vsi_handle */
+	status = ice_sched_cfg_agg(hw->port_info, agg_info->agg_id,
+				   ICE_AGG_TYPE_AGG, replay_bitmap);
+	if (status)
+		return status;
+	/* Replay agg node bw (restore agg bw) */
+	status = ice_sched_replay_agg_bw(hw, agg_info);
+	if (status)
+		return status;
+
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	ice_sched_get_ena_tc_bitmap(pi, agg_vsi_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Move this VSI (vsi_handle) to above aggregator */
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_info->agg_id, vsi_handle,
+					    replay_bitmap);
+	if (status)
+		return status;
+	/* Replay VSI bw (restore VSI bw) */
+	return ice_sched_replay_vsi_bw(hw, vsi_handle,
+				       agg_vsi_info->tc_bitmap);
+}
+
+/**
+ * ice_replay_vsi_agg - replay VSI to aggregator node
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays the association of a VSI to aggregator type nodes,
+ * and the node bandwidth information.
+ */
+enum ice_status
+ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_replay_vsi_agg(hw, vsi_handle);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
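+
+/* Replay ordering (an illustrative note, not an enforced API contract): after
+ * a reset the driver is expected to call ice_sched_replay_agg_vsi_preinit()
+ * once to clear the TC bitmaps, replay each VSI via ice_replay_vsi_agg(), and
+ * finally call ice_sched_replay_agg() to recreate any aggregator nodes that
+ * were not rebuilt during the per-VSI replay.
+ */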
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..a556594
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12-bit register field that is configured while creating the
+ * RL profile(s). The MSB is a granularity bit that tells the granularity type:
+ * 0 - LSB bits are in byte granularity
+ * 1 - LSB bits are in 1K-byte granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
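+
+/* Example encoding (a minimal sketch derived from the macros above; the
+ * helper below is hypothetical and only illustrates the register layout):
+ *
+ *	static u16 encode_burst_size(u32 bytes)
+ *	{
+ *		if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+ *			return ICE_BYTE_GRANULARITY | (u16)bytes;
+ *		return ICE_KBYTE_GRANULARITY | (u16)(bytes / 1024);
+ *	}
+ */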
+
+#define ICE_RL_PROF_FREQUENCY 446000000
+#define ICE_RL_PROF_ACCURACY_BYTES 128
+#define ICE_RL_PROF_MULTIPLIER 10000
+#define ICE_RL_PROF_TS_MULTIPLIER 32
+#define ICE_RL_PROF_FRACTION 512
+
+struct rl_profile_params {
+	u32 bw;			/* in Kbps */
+	u16 rl_multiplier;
+	u16 wake_up_calc;
+	u16 rl_encode;
+};
+
+/* BW rate limit profile parameters list entry along
+ * with bandwidth maintained per layer in port info
+ */
+struct ice_aqc_rl_profile_info {
+	struct ice_aqc_rl_profile_elem profile;
+	struct LIST_ENTRY_TYPE list_entry;
+	u32 bw;			/* requested */
+	u16 prof_id_ref;	/* profile id to node association ref count */
+};
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+	/* save agg vsi TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	/* save agg TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+/* FW AQ command calls */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf, u16 buf_size,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd);
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+/* Get a scheduling node from SW DB for given TEID */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid);
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id);
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle);
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd);
+
+/* Tx scheduler rate limiter functions */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+	    enum ice_agg_type agg_type, u8 tc_bitmap);
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap);
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw);
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio);
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc);
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node);
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc);
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info);
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi);
+#endif /* _ICE_SCHED_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 09/32] net/ice/base: add virtual switch code
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (7 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 08/32] net/ice/base: add basic transmit scheduler Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 10/32] net/ice/base: add code to work with the NVM Wenzhuo Lu
                     ` (22 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to handle the virtual switch within the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 2812 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  333 +++++
 2 files changed, 3145 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..0379cd0
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2812 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * Word on Hardcoded values
+ * byte 0 = 0x2: to identify it as locally administered DA MAC
+ * byte 6 = 0x2: to identify it as locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In case of VLAN filter first two bytes defines ether type (0x8100)
+ *	and remaining two bytes are placeholder for programming a given VLAN id
+ *	In case of Ether type filter it is treated as header without VLAN tag
+ *	and byte 12 and 13 is used to program a given Ether type instead
+ */
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
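+
+/* Illustrative use of the layout above (a sketch, not extra driver logic):
+ * with eth_hdr pointing at a copy of dummy_eth_header, a VLAN filter programs
+ * its VLAN ID into bytes 14-15:
+ *
+ *	__be16 *off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+ *	*off = CPU_TO_BE16(vlan_id);
+ *
+ * ice_fill_sw_rule() below patches the dummy header in exactly this way.
+ */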
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
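+
+/* The size macros above account for the variable-length tail of
+ * ice_aqc_sw_rules_elem: the fixed descriptor minus the pdata union, plus the
+ * specific rule payload and its flexible array. A typical allocation (as done
+ * later in ice_create_pkt_fwd_rule()) looks like:
+ *
+ *	s_rule = (struct ice_aqc_sw_rules_elem *)
+ *		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+ */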
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe bookkeeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller first calls this function with *req_desc set to 0. If the
+ * response from f/w has *req_desc set to 0, all the switch configuration
+ * information has been returned; if non-zero (meaning not all the information
+ * was returned), the caller should call this function again with *req_desc
+ * set to the previous value returned by f/w to get the next block of switch
+ * configuration information.
+ *
+ * *num_elems is an output-only parameter reflecting the number of elements in
+ * the response buffer. The caller should use *num_elems while parsing the
+ * response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
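+
+/* A minimal usage sketch of the paging protocol described above (see
+ * ice_get_initial_sw_cfg() below for the real caller):
+ *
+ *	u16 req_desc = 0, num_elems;
+ *	do {
+ *		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+ *					   &req_desc, &num_elems, NULL);
+ *		if (status)
+ *			break;
+ *		... parse num_elems entries from rbuf ...
+ *	} while (req_desc);
+ */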
+
+
+
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware also add it into the VSI handle list.
+ * If this function gets called after reset for exisiting VSIs then update
+ * with the new HW VSI number in the corresponding VSI handle list entry.
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
+
+/**
+ * ice_free_vsi - free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		/* Setting LB for prune actions will result in replicated
+		 * packets to the internal switch that will be dropped.
+		 */
+		if (fi->lkup_type != ICE_SW_LKUP_VLAN)
+			fi->lb_en = true;
+
+		/* Set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. One of the following is true:
+		 *    2.1 The lookup is a directional lookup like ethertype,
+		 *        promiscuous, ethertype-MAC, promiscuous-VLAN
+		 *        and default-port, OR
+		 *    2.2 The lookup is VLAN, OR
+		 *    2.3 The lookup is MAC with an mcast or bcast addr, OR
+		 *    2.4 The lookup is MAC_VLAN with an mcast or bcast addr.
+		 *
+		 * OR
+		 *
+		 * The switch is a VEPA.
+		 *
+		 * In all other cases, the LAN enable has to be set to false.
+		 */
+		if (hw->evb_veb) {
+			if (fi->lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC ||
+			    fi->lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC_VLAN ||
+			    fi->lkup_type == ICE_SW_LKUP_DFLT ||
+			    fi->lkup_type == ICE_SW_LKUP_VLAN ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)))
+				fi->lan_en = true;
+		} else {
+			fi->lan_en = true;
+		}
+	}
+}
+
+/**
+ * ice_ilog2 - Calculates the integer log base 2 of a number
+ * @n: number on which to perform operation
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
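+
+/* For example, a queue group of size 8 gives ice_ilog2(8) == 3, which
+ * ice_fill_sw_rule() below programs as the queue region size (q_rgn).
+ */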
+
+
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on f_info
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
+
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold a software marker and update the switch rule
+ * entry pointed to by m_ent with the newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule ID of the previously created rule with a single
+	 * action. Once the update happens, hardware will treat this as a large
+	 * action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry in the VSI list ID to VSI mapping,
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The bookkeeping entries will get removed when the base driver
+	 * calls the remove filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do the bookkeeping associated with adding filter
+ * information. The algorithm is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added so far
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list id
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry was large action then the large action needs
+		 * to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
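+
+/* Illustrative flow (not additional driver logic): when VSI 3 subscribes to a
+ * MAC that VSI 1 already forwards with a single rule, the first branch above
+ * allocates a two-entry VSI list {1, 3}, rewrites the existing rule to
+ * ICE_FWD_TO_VSI_LIST and records the mapping. A later subscriber, say VSI 5,
+ * takes the else branch and is simply added to the existing list.
+ */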
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists need to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list id found containing vsi_handle
+ *
+ * Helper function to search a VSI list with single entry containing given VSI
+ * handle element. This can be extended further to search VSI list with more
+ * than 1 vsi_count. Returns pointer to VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list should be emptied before this function is called to remove the
+ * VSI list.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to be removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
+		/* a ref_cnt > 1 indicates that the vsi_list is being
+		 * shared by multiple rules. Decrement the ref_cnt and
+		 * remove this rule, but do not modify the list, as it
+		 * is in-use by other rules.
+		 */
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = true;
+	} else {
+		/* a ref_cnt of 1 indicates the vsi_list is only used
+		 * by one rule. However, the original removal request is only
+		 * for a single VSI. Update the vsi_list first, and only
+		 * remove the rule if there are no further VSIs in this list.
+		 */
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		/* Remove the bookkeeping entry from the list */
+		ice_free(hw, s_rule);
+
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't check for
+ * duplicates in this case; removing duplicates from a given list is the
+ * caller's responsibility.
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is vsi num */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Call the AQ switch rule command in ICE_AQ_MAX_BUF_LEN-sized chunks */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill in the rule id based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The bookkeeping entries will be removed when the
+			 * base driver calls the remove-filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
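
For reference, a minimal caller-side sketch of how ice_add_mac() might be
driven. This is illustrative only and not part of the patch: the helper name
example_add_one_mac and its arguments are made up, while the types, macros
and field requirements all come from the code above.

	/* Hypothetical caller sketch: build a one-entry MAC filter list and
	 * install it with ice_add_mac(). src_id must be ICE_SRC_ID_VSI or
	 * ice_add_mac() returns ICE_ERR_PARAM; the flag field is set to
	 * ICE_FLTR_TX by ice_add_mac() itself.
	 */
	static enum ice_status
	example_add_one_mac(struct ice_hw *hw, u16 vsi_handle,
			    const u8 mac[ETH_ALEN])
	{
		struct ice_fltr_list_entry entry;
		struct LIST_HEAD_TYPE m_list;

		INIT_LIST_HEAD(&m_list);
		ice_memset(&entry, 0, sizeof(entry), ICE_NONDMA_MEM);
		entry.fltr_info.lkup_type = ICE_SW_LKUP_MAC;
		entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
		entry.fltr_info.src_id = ICE_SRC_ID_VSI;
		entry.fltr_info.vsi_handle = vsi_handle;
		ice_memcpy(entry.fltr_info.l_data.mac.mac_addr, mac, ETH_ALEN,
			   ICE_NONDMA_TO_NONDMA);
		LIST_ADD(&entry.list_entry, &m_list);

		/* per-entry results are also written back to entry.status */
		return ice_add_mac(hw, &m_list);
	}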
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing VSI that we
+			 * want to add. If found, use the same vsi_list_id for
+			 * this new VLAN rule or else create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI id only if
+		 * it is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* The VLAN rule exists, and the VSI list used by this rule is
+		 * referenced by more than one VLAN rule. Create a new VSI
+		 * list that appends the new VSI to the previous one, and
+		 * update the existing VLAN rule to point to the new VSI
+		 * list id.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a VSI count of one. We should never hit the condition
+		 * below.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* Before overriding the VSI list map info, decrement the
+		 * ref_cnt of the previous VSI list.
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
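
The VLAN path mirrors the MAC sketch earlier; only the lookup type and the
l_data member change. Again a hedged illustration, with the enclosing
function name made up:

	/* Hypothetical caller sketch: install one VLAN filter. The VLAN ID
	 * must fit in 12 bits and src_id must be ICE_SRC_ID_VSI, as
	 * enforced by ice_add_vlan_internal().
	 */
	static enum ice_status
	example_add_one_vlan(struct ice_hw *hw, u16 vsi_handle, u16 vlan_id)
	{
		struct ice_fltr_list_entry entry;
		struct LIST_HEAD_TYPE v_list;

		INIT_LIST_HEAD(&v_list);
		ice_memset(&entry, 0, sizeof(entry), ICE_NONDMA_MEM);
		entry.fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
		entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
		entry.fltr_info.src_id = ICE_SRC_ID_VSI;
		entry.fltr_info.vsi_handle = vsi_handle;
		entry.fltr_info.l_data.vlan.vlan_id = vlan_id;
		LIST_ADD(&entry.list_entry, &v_list);

		return ice_add_vlan(hw, &v_list);
	}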
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
+ * @hw: pointer to the hardware structure
+ * @mv_list: list of MAC and VLAN filters
+ *
+ * If the VSI on which the MAC-VLAN pair has to be added has RX and TX VLAN
+ * pruning bits enabled, it is the caller's responsibility to also add a
+ * VLAN-only filter on the same VSI. Otherwise, packets belonging to that
+ * VLAN won't be received on that VSI.
+ */
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
+{
+	struct ice_fltr_list_entry *mv_list_itr;
+
+	if (!mv_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
+			    list_entry) {
+		enum ice_sw_lkup_type l_type =
+			mv_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		mv_list_itr->status =
+			ice_add_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+					      mv_list_itr);
+		if (mv_list_itr->status)
+			return mv_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif
+
+
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @pi: pointer to the port_info structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the SWID).
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	struct ice_hw *hw = pi->hw;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = pi->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_tx_vsi_rule_id;
+	}
+
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = hw_vsi_id;
+			pi->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = hw_vsi_id;
+			pi->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
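
A short usage sketch for the default-VSI rule; pi and vsi_handle are assumed
to come from an already-initialized port, and the error handling is
illustrative:

	/* Hypothetical fragment: make the VSI the default RX VSI, then
	 * undo it. On success ice_cfg_dflt_vsi() records the rule id in
	 * pi->dflt_rx_vsi_rule_id so the later removal can find it.
	 */
	enum ice_status status;

	status = ice_cfg_dflt_vsi(pi, vsi_handle, true, ICE_FLTR_RX);
	if (!status)
		status = ice_cfg_dflt_vsi(pi, vsi_handle, false, ICE_FLTR_RX);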
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. Caller should be aware that this call will only work if all
+ * the entries passed into m_list were added previously. It will not attempt to
+ * do a partial remove of entries that were found.
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
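
Removal takes the same list shape as ice_add_mac(). A hedged fragment,
assuming the entry was added earlier with matching l_data and flag:

	/* Hypothetical fragment: remove a previously added MAC filter.
	 * ice_find_rule_entry() matches on l_data and flag, so the entry
	 * must describe the same address and direction it was added with.
	 */
	struct ice_fltr_list_entry entry;
	struct LIST_HEAD_TYPE m_list;

	INIT_LIST_HEAD(&m_list);
	ice_memset(&entry, 0, sizeof(entry), ICE_NONDMA_MEM);
	entry.fltr_info.lkup_type = ICE_SW_LKUP_MAC;
	entry.fltr_info.flag = ICE_FLTR_TX;
	entry.fltr_info.vsi_handle = vsi_handle;
	ice_memcpy(entry.fltr_info.l_data.mac.mac_addr, mac, ETH_ALEN,
		   ICE_NONDMA_TO_NONDMA);
	LIST_ADD(&entry.list_entry, &m_list);
	status = ice_remove_mac(hw, &m_list);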
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of MAC VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+						 v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif /* !NO_MACVLAN_SUPPORT */
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of fltr_info.fltr_act and
+ * fltr_info.fwd_id fields. These are set such that later logic can
+ * extract which VSI to remove the fltr from, and pass on that information.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with list.
+ */
+static enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure VSI id is valid and within boundary */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+
+/**
+ * ice_determine_promisc_mask
+ * @fi: filter info to parse
+ *
+ * Helper function to determine which ICE_PROMISC_ mask corresponds
+ * to the given filter info.
+ */
+static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
+{
+	u16 vid = fi->l_data.mac_vlan.vlan_id;
+	u8 *macaddr = fi->l_data.mac.mac_addr;
+	bool is_tx_fltr = false;
+	u8 promisc_mask = 0;
+
+	if (fi->flag == ICE_FLTR_TX)
+		is_tx_fltr = true;
+
+	if (IS_BROADCAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_BCAST_TX : ICE_PROMISC_BCAST_RX;
+	else if (IS_MULTICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_MCAST_TX : ICE_PROMISC_MCAST_RX;
+	else if (IS_UNICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_UCAST_TX : ICE_PROMISC_UCAST_RX;
+	if (vid)
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_VLAN_TX : ICE_PROMISC_VLAN_RX;
+
+	return promisc_mask;
+}
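
A worked reading of the mapping above, using the flag values defined in
ice_switch.h; the concrete filter contents are made up:

	/* Worked example (illustrative values):
	 * fi->flag == ICE_FLTR_TX, broadcast MAC address, vlan_id == 100
	 *   -> ICE_PROMISC_BCAST_TX | ICE_PROMISC_VLAN_TX
	 *      == 0x20 | 0x80 == 0xa0
	 * The same filter in the RX direction would yield 0x10 | 0x40.
	 */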
+
+
+/**
+ * ice_remove_promisc - Remove promisc based filter rules
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to be removed
+ * @v_list: list of promisc entries
+ */
+static enum ice_status
+ice_remove_promisc(struct ice_hw *hw, u8 recp_id,
+		   struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, recp_id, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_clear_vsi_promisc - clear specified promiscuous mode(s) for given VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to clear mode
+ * @promisc_mask: mask of promiscuous config bits to clear
+ * @vid: VLAN ID to clear VLAN promiscuous
+ */
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry, *tmp;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct ice_fltr_mgmt_list_entry *itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u8 recipe_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (vid)
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	else
+		recipe_id = ICE_SW_LKUP_PROMISC;
+
+	rule_head = &sw->recp_list[recipe_id].filt_rules;
+	rule_lock = &sw->recp_list[recipe_id].filt_rule_lock;
+
+	INIT_LIST_HEAD(&remove_list_head);
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(itr, rule_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		u8 fltr_promisc_mask = 0;
+
+		if (!ice_vsi_uses_fltr(itr, vsi_handle))
+			continue;
+
+		fltr_promisc_mask |=
+			ice_determine_promisc_mask(&itr->fltr_info);
+
+		/* Skip if filter is not completely specified by given mask */
+		if (fltr_promisc_mask & ~promisc_mask)
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							&remove_list_head,
+							&itr->fltr_info);
+		if (status) {
+			ice_release_lock(rule_lock);
+			goto free_fltr_list;
+		}
+	}
+	ice_release_lock(rule_lock);
+
+	status = ice_remove_promisc(hw, recipe_id, &remove_list_head);
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+
+	return status;
+}
+
+/**
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
+ */
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, u16 vid)
+{
+	enum { UCAST_FLTR = 1, MCAST_FLTR, BCAST_FLTR };
+	struct ice_fltr_list_entry f_list_entry;
+	struct ice_fltr_info new_fltr;
+	enum ice_status status = ICE_SUCCESS;
+	bool is_tx_fltr;
+	u16 hw_vsi_id;
+	int pkt_type;
+	u8 recipe_id;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_vsi_promisc\n");
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	ice_memset(&new_fltr, 0, sizeof(new_fltr), ICE_NONDMA_MEM);
+
+	if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC_VLAN;
+		new_fltr.l_data.mac_vlan.vlan_id = vid;
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	} else {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC;
+		recipe_id = ICE_SW_LKUP_PROMISC;
+	}
+
+	/* Separate filters must be set for each direction/packet type
+	 * combination, so we will loop over the mask value, store the
+	 * individual type, and clear it out in the input mask as it
+	 * is found.
+	 */
+	while (promisc_mask) {
+		u8 *mac_addr;
+
+		pkt_type = 0;
+		is_tx_fltr = false;
+
+		if (promisc_mask & ICE_PROMISC_UCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_RX;
+			pkt_type = UCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_UCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_TX;
+			pkt_type = UCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_RX;
+			pkt_type = MCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_TX;
+			pkt_type = MCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_RX;
+			pkt_type = BCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_TX;
+			pkt_type = BCAST_FLTR;
+			is_tx_fltr = true;
+		}
+
+		/* Check for VLAN promiscuous flag */
+		if (promisc_mask & ICE_PROMISC_VLAN_RX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_RX;
+		} else if (promisc_mask & ICE_PROMISC_VLAN_TX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_TX;
+			is_tx_fltr = true;
+		}
+
+		/* Set filter DA based on packet type */
+		mac_addr = new_fltr.l_data.mac.mac_addr;
+		if (pkt_type == BCAST_FLTR) {
+			ice_memset(mac_addr, 0xff, ETH_ALEN, ICE_NONDMA_MEM);
+		} else if (pkt_type == MCAST_FLTR ||
+			   pkt_type == UCAST_FLTR) {
+			/* Use the dummy ether header DA */
+			ice_memcpy(mac_addr, dummy_eth_header, ETH_ALEN,
+				   ICE_NONDMA_TO_NONDMA);
+			if (pkt_type == MCAST_FLTR)
+				mac_addr[0] |= 0x1;	/* Set multicast bit */
+		}
+
+		/* Need to reset this to zero for all iterations */
+		new_fltr.flag = 0;
+		if (is_tx_fltr) {
+			new_fltr.flag |= ICE_FLTR_TX;
+			new_fltr.src = hw_vsi_id;
+		} else {
+			new_fltr.flag |= ICE_FLTR_RX;
+			new_fltr.src = hw->port_info->lport;
+		}
+
+		new_fltr.fltr_act = ICE_FWD_TO_VSI;
+		new_fltr.vsi_handle = vsi_handle;
+		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+		f_list_entry.fltr_info = new_fltr;
+
+		status = ice_add_rule_internal(hw, recipe_id, &f_list_entry);
+		if (status != ICE_SUCCESS)
+			goto set_promisc_exit;
+	}
+
+set_promisc_exit:
+	return status;
+}
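
A hedged sketch of the set/clear pairing; the mask choice and the VSI handle
are illustrative:

	/* Hypothetical fragment: enable RX unicast + multicast promiscuous
	 * mode on a VSI without VLAN qualification (vid == 0), and later
	 * clear exactly the same bits with ice_clear_vsi_promisc().
	 */
	u8 mask = ICE_PROMISC_UCAST_RX | ICE_PROMISC_MCAST_RX;
	enum ice_status status;

	status = ice_set_vsi_promisc(hw, vsi_handle, mask, 0);
	if (!status)
		status = ice_clear_vsi_promisc(hw, vsi_handle, mask, 0);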
+
+/**
+ * ice_set_vlan_vsi_promisc
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @rm_vlan_promisc: Clear VLANs VSI promisc mode
+ *
+ * Configure VSI with all associated VLANs to given promiscuous mode(s)
+ */
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct LIST_HEAD_TYPE vsi_list_head;
+	struct LIST_HEAD_TYPE *vlan_head;
+	struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
+	enum ice_status status;
+	u16 vlan_id;
+
+	INIT_LIST_HEAD(&vsi_list_head);
+	vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
+	ice_acquire_lock(vlan_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
+					  &vsi_list_head);
+	ice_release_lock(vlan_lock);
+	if (status)
+		goto free_fltr_list;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
+			    list_entry) {
+		vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+		if (rm_vlan_promisc)
+			status = ice_clear_vsi_promisc(hw, vsi_handle,
+						       promisc_mask, vlan_id);
+		else
+			status = ice_set_vsi_promisc(hw, vsi_handle,
+						     promisc_mask, vlan_id);
+		if (status)
+			break;
+	}
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&list_itr->list_entry);
+		ice_free(hw, list_itr);
+	}
+	return status;
+}
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		ice_remove_promisc(hw, lkup, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+#ifndef NO_MACVLAN_SUPPORT
+		ice_remove_mac_vlan(hw, &remove_list_head);
+#else
+		ice_debug(hw, ICE_DBG_SW, "MAC VLAN look up is not supported yet\n");
+#endif /* !NO_MACVLAN_SUPPORT */
+		break;
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Removing filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
+
+
+
+
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ * @recp_id: Recipe id for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filters of recipe recp_id for the VSI represented by vsi_handle.
+ * A valid VSI handle must be passed.
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the src in case it is vsi num */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the src in case it is vsi num */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Replay the default recipes and any that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
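
A hedged sketch of how the replay entry points are expected to be sequenced
after a reset; iterating every possible handle is illustrative, a real driver
would walk only its active VSIs:

	/* Hypothetical post-reset fragment: HW rules were wiped but the SW
	 * bookkeeping (filt_replay_rules) survived, so re-program each
	 * valid VSI and then drop the replay lists.
	 */
	enum ice_status status = ICE_SUCCESS;
	u16 vsi_handle;

	for (vsi_handle = 0; vsi_handle < ICE_MAX_VSI; vsi_handle++) {
		if (!ice_is_vsi_valid(hw, vsi_handle))
			continue;
		status = ice_replay_vsi_all_fltr(hw, vsi_handle);
		if (status != ICE_SUCCESS)
			break;
	}
	ice_rm_all_sw_replay_rule_info(hw);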
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..66a172f
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST,
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or set only the data associated with the lookup
+		   * match; everything else should remain zero.
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another, bigger recipe, then this is the
+	 * chain index corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, the count of those
+	 * recipes and their recipe ids
+	 */
+	u8 n_grp_count;
+
+	/* Bit map specifying the IDs associated with this group of recipes */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	struct ice_lock filt_rule_lock;	/* protect filter rule structure */
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	/* This allows the user to specify the recipe priority.
+	 * For now, this becomes 'fwd_priority' when the recipe is
+	 * created; recipes can usually have 'fwd' and 'join'
+	 * priority.
+	 */
+	u8 priority;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains MAC or VLAN membership
+ * to HW list mapping, since multiple VSIs can subscribe to the same MAC or
+ * VLAN. As an optimization the VSI list should be created only when a
+ * second VSI becomes a subscriber to the same MAC address. VSI lists are always
+ * used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#ifndef NO_MACVLAN_SUPPORT
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#endif /* !NO_MACVLAN_SUPPORT */
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+
+
+/* Promisc/defport setup for VSIs */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction);
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		    u16 vid);
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid);
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc);
+
+
+
+
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 10/32] net/ice/base: add code to work with the NVM
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (8 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 09/32] net/ice/base: add virtual switch code Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 11/32] net/ice/base: add common functions Wenzhuo Lu
                     ` (21 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to read/write/query the NVM image.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_nvm.c | 387 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 387 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_nvm.c

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* The highest byte of the offset must be zero. */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
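
A worked example of the offset split performed above; the value is arbitrary:

	/* offset = 0x012345:
	 *   offset_low  = 0x2345  (offset & 0xFFFF)
	 *   offset_high = 0x01    ((offset >> 16) & 0xFF)
	 * Offsets with a non-zero highest byte (above 0x00FFFFFF) are
	 * rejected with ICE_ERR_PARAM before the descriptor is built.
	 */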
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond SR lmt.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* We can access only up to 4KB (one sector), in one AQ write */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
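+
+/* Worked example of the checks above (illustrative, assuming
+ * ICE_SR_SECTOR_SIZE_IN_WORDS is 2048, i.e. one 4KB sector):
+ *   offset 2040, words 8  -> last word is 2047, same sector: accepted
+ *   offset 2040, words 16 -> last word is 2055, which falls in the next
+ *                            sector, so the access is rejected
+ */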
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words read from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* The "offset" and "words" parameters are sized in words (16 bits),
+	 * but ice_aq_read_nvm expects these values in bytes, so convert
+	 * them while making the call.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16-bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16-bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. The caller (see ice_read_sr_buf) takes NVM ownership before
+ * the read and releases it afterwards.
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words to read in this step.
+		 * It's not allowed to read more than one sector at a time
+		 * or to cross sector boundaries.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* If this is the last command in the series, set the flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
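+
+/* Chunking example for the loop above (illustrative, assuming 2048-word
+ * sectors): reading 3000 words from offset 100 issues one 1948-word AQ
+ * read up to the sector boundary, then a final 1052-word read with
+ * last_cmd set.
+ */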
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads Shadow RAM word and acquires NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16-bit word from the Shadow RAM using ice_read_sr_word_aq.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM setting
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as Shadow RAM size,
+ * version information (DEV starter, EETRACK, OEM) and blank_nvm_mode
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the nvm programming mode
+	 * as the blank mode may be used in the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Switching to words (sr_size contains power of 2) */
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
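+
+/* Sizing example for the code above (illustrative, assuming
+ * ICE_SR_WORDS_IN_1KB is 512): GLNVM_GENS reporting sr_size = 3 gives
+ * BIT(3) = 8, so sr_words = 8 * 512 = 4096 words, i.e. an 8KB Shadow RAM.
+ */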
+
+
+/**
+ * ice_read_sr_buf - Reads Shadow RAM buf and acquires lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16-bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buf read is preceded by the NVM ownership take
+ * and followed by the release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 11/32] net/ice/base: add common functions
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (9 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 10/32] net/ice/base: add code to work with the NVM Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 12/32] net/ice/base: add various headers Wenzhuo Lu
                     ` (20 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the common functions that multiple other features rely on: device
initialization and reset, AdminQ command wrappers, resource
acquire/release, and Rx/Tx queue context programming.
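
A rough sketch of the intended call order for these helpers (illustrative
only: it assumes the caller has already filled in hw->hw_addr and elides
error handling):

    if (ice_init_hw(hw))    /* PF reset, control queues, NVM, caps,
                             * scheduler, filters, MAC address
                             */
        return -1;

    /* ... normal operation, e.g. ice_aq_get_fw_ver(hw, NULL) ... */

    ice_deinit_hw(hw);      /* unrolls everything ice_init_hw set up */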

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  186 ++
 2 files changed, 3707 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d49264d
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
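+
+/* Illustrative expansion: ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, 0) writes
+ * GLFLXP_RXDID_FLX_WRD_0(rxdid) with the opcode field set to
+ * ICE_RX_OPC_MDID and the protocol MDID field set to mdid, each value
+ * shifted and masked by the matching _S/_M register definitions.
+ */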
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non-PXE mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return the per-PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user-specified buffer, which should be interpreted as
+ * a "manage_mac_read" response. The LAN address is also stored in the HW
+ * struct (hw->port_info->mac). ice_aq_discover_caps is expected to be
+ * called before this function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
+	}
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
+		/* If more than one media type is selected, report unknown */
+		return ICE_MEDIA_UNKNOWN;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR:
+		case ICE_PHY_TYPE_LOW_50GBASE_FR:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_DR:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_CP:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	} else {
+		switch (hw_link_info->phy_type_high) {
+		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x0607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
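+
+/* Size example (illustrative): ICE_FW_LOG_DESC_SIZE(3) is the size of
+ * struct ice_aqc_fw_logging_data plus two extra 16-bit entries, since
+ * the struct already contains room for the first entry.
+ */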
+
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Disable FW logging only when the control queue is still responsive */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but all modules are not
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
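+
+/* Usage sketch for the function above (illustrative; "module" stands for
+ * any event index below ICE_AQC_FW_LOG_ID_MAX): set hw->fw_log.cq_en =
+ * true and hw->fw_log.evnts[module].cfg to the desired event bits before
+ * ice_init_hw(), which then calls ice_cfg_fw_log(hw, true).
+ */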
+
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+	/* Initialize max burst size */
+	if (!hw->max_burst_size)
+		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
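+
+/* Timing example for the poll above (illustrative): a GRSTDEL field of 5
+ * gives grst_delay = 5 + 10 = 15 iterations of 100ms, i.e. the 500ms
+ * reset delay plus the extra 1s budget for outstanding AQ commands.
+ */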
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
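+
+/* Packing example (illustrative): per ice_rlan_ctx_info above,
+ * ice_set_ctx() places rlan_ctx->qlen into 13 bits starting at bit 89 of
+ * ctx_buf, so every sparse field lands at the exact bit offset the
+ * hardware expects before the dword-by-dword register write.
+ */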
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps a debug log about the control command, including descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+	if (!(mask & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire-lock, perform-action, release-lock
+ * phase of operation, it is possible that the FW may detect a timeout and
+ * issue a CORER. In this case, the driver will receive a CORER interrupt and
+ * will have to determine its cause. The calling thread that is handling this
+ * flow will likely get an error propagated back to it indicating that the
+ * Download Package, Update Package or Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion specifies the maximum time in ms that the driver
+	 * may hold the resource in the Timeout field.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
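+
+/* Illustrative sketch (not part of this patch): acting on the three Global
+ * Config Lock outcomes documented above via the ice_acquire_res() wrapper.
+ * ice_download_pkg_via_aq() is a hypothetical stand-in for the package
+ * download step, and ICE_GLOBAL_CFG_LOCK_TIMEOUT is an assumed constant.
+ *
+ *	status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID,
+ *				 ICE_RES_WRITE, ICE_GLOBAL_CFG_LOCK_TIMEOUT);
+ *	if (!status) {
+ *		status = ice_download_pkg_via_aq(hw);
+ *		ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID);
+ *	} else if (status == ICE_ERR_AQ_NO_WORK) {
+ *		status = ICE_SUCCESS;
+ *	}
+ *
+ * Any other status means the lock was not acquired and the driver should
+ * fail to load.
+ */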
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * Release a common resource using the admin queue commands (0x0009).
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* There are some rare cases when trying to release the resource
+	 * results in an admin queue timeout, so handle them correctly.
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
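+
+/* Usage sketch (illustrative only): the acquire/use/release pattern these
+ * helpers implement, here for the NVM resource. ICE_NVM_TIMEOUT is an
+ * assumed timeout constant.
+ *
+ *	if (!ice_acquire_res(hw, ICE_NVM_RES_ID, ICE_RES_READ,
+ *			     ICE_NVM_TIMEOUT)) {
+ *		... read NVM words while the resource is owned ...
+ *		ice_release_res(hw, ICE_NVM_RES_ID);
+ *	}
+ */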
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_get_num_per_func - determine number of resources per PF
+ * @hw: pointer to the hw structure
+ * @max: value to be evenly split between each PF
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities and use this to calculate the number of resources
+ * per PF based on the max value passed in.
+ */
+static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return max / funcs;
+}
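+
+/* Worked example (values are assumptions): with valid_functions = 0x0F,
+ * ice_hweight8() counts four active PFs, so a device-wide maximum of
+ * 768 VSIs yields 768 / 4 = 192 VSIs guaranteed per PF.
+ */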
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_num_per_func(hw, ICE_MAX_VSI);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: cap count needed if AQ err==ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+
+	/* Prep values for flags, sah, sal */
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ * @phy_type_high: higher part of phy_type
+ *
+ * This helper function will convert an entry in phy type structure
+ * [phy_type_low, phy_type_high] to its corresponding link speed.
+ * Note: In the structure of [phy_type_low, phy_type_high], there should
+ * be one bit set, as this function will convert one phy type to its
+ * speed.
+ * If no bit is set, ICE_LINK_SPEED_UNKNOWN will be returned.
+ * If more than one bit is set, ICE_LINK_SPEED_UNKNOWN will be returned.
+ */
+static u16
+ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
+{
+	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2:
+	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI2:
+	case ICE_PHY_TYPE_LOW_50GBASE_CP:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR:
+	case ICE_PHY_TYPE_LOW_50GBASE_FR:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI1:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
+		break;
+	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4:
+	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_AUI4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_100GBASE_DR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	switch (phy_type_high) {
+	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
+	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return speed_phy_type_low;
+	else
+		return speed_phy_type_high;
+}
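+
+/* Example (illustrative): a single PHY type bit maps to one speed, so
+ *
+ *	ice_get_link_speed_based_on_phy_type(ICE_PHY_TYPE_LOW_25GBASE_SR, 0)
+ *
+ * returns ICE_AQ_LINK_SPEED_25GB, while passing a value with two bits set
+ * (or bits set in both halves) returns ICE_AQ_LINK_SPEED_UNKNOWN.
+ */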
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: For the link_speeds_bitmap format, see
+ * [ice_aqc_get_link_status->link_speed]. The caller can pass in a
+ * link_speeds_bitmap that includes multiple speeds.
+ *
+ * Each entry in the [phy_type_low, phy_type_high] structure
+ * represents a certain link speed. This helper function will turn on
+ * bits in [phy_type_low, phy_type_high] based on the value of the
+ * link_speeds_bitmap input parameter.
+ */
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_high;
+	u64 pt_low;
+	int index;
+
+	/* We first check with low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+
+	/* We then check with high part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
+		pt_high = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_high |= BIT_ULL(index);
+	}
+}
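+
+/* Usage sketch (illustrative only): expand a speed bitmap into PHY type
+ * masks before building a set-PHY-config request.
+ *
+ *	u64 phy_low = 0, phy_high = 0;
+ *
+ *	ice_update_phy_type(&phy_low, &phy_high,
+ *			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+ *
+ * phy_low now has every 10G and 25G PHY type bit set; phy_high is left
+ * untouched since all of its PHY types are 100G.
+ */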
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_high = pcaps->phy_type_high;
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info. It sometimes takes a really
+		 * long time for the link to come back from the atomic
+		 * reset, so wait a little bit.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
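+
+/* Usage sketch (illustrative only): request full flow control and let the
+ * function restart the link so the new pause settings take effect.
+ *
+ *	u8 aq_fail;
+ *
+ *	pi->fc.req_mode = ICE_FC_FULL;
+ *	status = ice_set_fc(pi, &aq_fail, true);
+ *
+ * On failure, aq_fail reports which AQ step (get, set or update) failed.
+ */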
+
+/**
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
+ *
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
+ */
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg)
+{
+	if (!caps || !cfg)
+		return;
+
+	cfg->phy_type_low = caps->phy_type_low;
+	cfg->phy_type_high = caps->phy_type_high;
+	cfg->caps = caps->caps;
+	cfg->low_power_ctrl = caps->low_power_ctrl;
+	cfg->eee_cap = caps->eee_cap;
+	cfg->eeer_value = caps->eeer_value;
+	cfg->link_fec_opt = caps->link_fec_options;
+}
+
+/**
+ * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
+ * @cfg: PHY configuration data to set FEC mode
+ * @fec: FEC mode to configure
+ *
+ * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
+ * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
+ * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
+ */
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
+{
+	switch (fec) {
+	case ICE_FEC_BASER:
+		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+				     ICE_AQC_PHY_FEC_25G_KR_REQ;
+		break;
+	case ICE_FEC_RS:
+		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
+		break;
+	case ICE_FEC_NONE:
+		/* Clear auto FEC and all FEC option bits. */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
+		break;
+	case ICE_FEC_AUTO:
+		/* AND auto FEC bit, and all caps bits. */
+		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
+		break;
+	}
+}
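+
+/* Usage sketch (illustrative only) of the copy-then-configure sequence the
+ * comment above requires; error handling is omitted.
+ *
+ *	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+ *
+ *	ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+ *	ice_copy_phy_caps_to_cfg(pcaps, &cfg);
+ *	ice_cfg_phy_fec(&cfg, ICE_FEC_RS);
+ *	ice_aq_set_phy_cfg(pi->hw, pi->lport, &cfg, NULL);
+ */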
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Variable link_up is true if the link is up, false if it is down.
+ * The variable link_up is invalid if status is non-zero. As a
+ * result of this call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
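+
+/* Usage sketch (illustrative only): spread flows over nb_rxq receive queues
+ * with a 512-entry PF lookup table; nb_rxq is an assumed variable.
+ *
+ *	u8 lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512];
+ *	u16 i;
+ *
+ *	for (i = 0; i < sizeof(lut); i++)
+ *		lut[i] = i % nb_rxq;
+ *	status = ice_aq_set_rss_lut(hw, vsi_handle,
+ *				    ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+ *				    lut, sizeof(lut));
+ */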
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
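+
+/* Usage sketch (illustrative only): program a hash key for a VSI; the
+ * rss_key byte array is an assumption of the example.
+ *
+ *	struct ice_aqc_get_set_rss_keys key_buf = { 0 };
+ *
+ *	ice_memcpy(key_buf.standard_rss_key, rss_key,
+ *		   sizeof(key_buf.standard_rss_key), ICE_NONDMA_TO_NONDMA);
+ *	status = ice_aq_set_rss_key(hw, vsi_handle, &key_buf);
+ */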
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of
+ * the Tx queue context: the Completion queue ID (if the queue uses a
+ * Completion queue), the Quanta profile, the Cache profile and the Packet
+ * shaper profile.
+ *
+ * After the add Tx LAN queue AQ command completes, interrupts should be
+ * associated with specific queues. Association of a Tx queue to a Doorbell
+ * queue is not part of the Add LAN Tx queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* flush pipe on timeout */
+	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+	if (status) {
+		if (!qg_list)
+			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
+				  vmvf_num, hw->adminq.sq_last_status);
+		else
+			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
+				  LE16_TO_CPU(qg_list[0].q_id[0]),
+				  hw->adminq.sq_last_status);
+	}
+	return status;
+}
+
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instructions count is masked
+	 * to 5 bits so the shift will do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instructions count is masked
+	 * to 6 bits so the shift will do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
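+
+/* Illustrative sketch (not part of this patch): a context descriptor table
+ * drives ice_set_ctx(). Each ice_ctx_ele entry gives the source offset,
+ * field size, bit width and destination LSB, e.g. for a hypothetical
+ * two-field context struct my_ctx:
+ *
+ *	static const struct ice_ctx_ele ctx_info[] = {
+ *		{ offsetof(struct my_ctx, base), sizeof(u64), 57, 0 },
+ *		{ offsetof(struct my_ctx, qlen), sizeof(u16), 13, 57 },
+ *		{ 0 }
+ *	};
+ *
+ *	ice_set_ctx((u8 *)&my_ctx, packed_buf, ctx_info);
+ *
+ * my_ctx and its fields are assumptions of the example; real tables are
+ * built with the ICE_CTX_STORE() macro.
+ */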
+
+
+
+
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bit 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 * Bit 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS) {
+		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
+			  LE16_TO_CPU(buf->txqs[0].txq_id),
+			  hw->adminq.sq_last_status);
+		goto ena_txq_exit;
+	}
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
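+
+/* Usage sketch (illustrative only): enable a single Tx queue on TC 0. The
+ * queue ID q_id and the packed queue context are assumed to be prepared by
+ * the caller beforehand.
+ *
+ *	struct ice_aqc_add_tx_qgrp qg_buf = { 0 };
+ *
+ *	qg_buf.num_txqs = 1;
+ *	qg_buf.txqs[0].txq_id = CPU_TO_LE16(q_id);
+ *	... fill qg_buf.txqs[0].txq_ctx ...
+ *	status = ice_ena_vsi_txq(pi, vsi_handle, 0, 1, &qg_buf,
+ *				 sizeof(qg_buf), NULL);
+ */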
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* If the queues are already disabled but the disable queue command
+	 * still has to be sent to complete the VF reset, call
+	 * ice_aq_dis_lan_txq without any queue information.
+	 */
+
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: lan or rdma
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI lan queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max lan queues array per TC
+ *
+ * This function adds/updates the VSI lan queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+
+/**
+ * ice_replay_pre_init - replay pre-initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from replay filter list head if there is any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+	ice_sched_replay_agg_vsi_preinit(hw);
+
+	return ice_sched_replay_tc_node_bw(hw);
+}
+
+/**
+ * ice_replay_vsi - replay VSI configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver VSI handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	if (!status)
+		status = ice_replay_vsi_agg(hw, vsi_handle);
+	return status;
+}
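+
+/* Usage sketch (illustrative only): the expected replay order after a
+ * reset, with the main VSI replayed first and a cleanup pass at the end.
+ *
+ *	status = ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
+ *	... for each other valid vsi_handle ...
+ *		status = ice_replay_vsi(hw, vsi_handle);
+ *	ice_replay_post(hw);
+ */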
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_sched_replay_agg(hw);
+}
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* Device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values, so that the
+	 * reported stats count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
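+
+/* Worked example of the roll-over branch above, with illustrative values:
+ * if *prev_stat == 0xFFFFFFFF00 (near the 40-bit limit) and the next raw
+ * read wraps to 0x10, then new_data < *prev_stat, and the reported delta
+ * is (0x10 + BIT_ULL(40)) - 0xFFFFFFFF00 = 0x110 counts since baseline.
+ */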
+
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* Device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values, so that the
+	 * reported stats count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to element information
+ *
+ * This function queries HW element information
+ */
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
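+
+/* Illustrative caller (a sketch, not part of this patch): query a single
+ * scheduler node by its TEID using only the helper above.
+ */
+static enum ice_status ice_example_query_node(struct ice_hw *hw, u32 teid)
+{
+	struct ice_aqc_get_elem buf;
+	enum ice_status status;
+
+	status = ice_sched_query_elem(hw, teid, &buf);
+	if (status == ICE_SUCCESS)
+		ice_debug(hw, ICE_DBG_SCHED, "queried node %u\n", teid);
+	return status;
+}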
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..082ae66
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "ice_switch.h"
+
+/* prototype for functions used for SW locks */
+void ice_free_list(struct LIST_HEAD_TYPE *list);
+void ice_init_lock(struct ice_lock *lock);
+void ice_acquire_lock(struct ice_lock *lock);
+void ice_release_lock(struct ice_lock *lock);
+void ice_destroy_lock(struct ice_lock *lock);
+
+void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
+void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
+
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg);
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+
+
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
+void ice_sched_replay_agg(struct ice_hw *hw);
+enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
+enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf);
+#endif /* _ICE_COMMON_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 12/32] net/ice/base: add various headers
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (10 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 11/32] net/ice/base: add common functions Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 13/32] net/ice/base: add protocol structures and defines Wenzhuo Lu
                     ` (19 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add various headers that provide status codes and
basic defines for use in the code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_alloc.h     | 22 ++++++++++++++++++
 drivers/net/ice/base/ice_flex_type.h | 19 +++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  8 +++++++
 drivers/net/ice/base/ice_status.h    | 45 ++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_status.h

diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 13/32] net/ice/base: add protocol structures and defines
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (11 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 12/32] net/ice/base: add various headers Wenzhuo Lu
@ 2018-12-14  8:34   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 14/32] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
                     ` (18 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:34 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and defines that describe which
protocols the NIC can handle.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 248 +++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..7b92c71
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* 1 word reserved for switch id from allowed 5 words.
+ * So a recipe can have max 4 words. And you can chain 5 such recipes
+ * together. So maximum words that can be programmed for look up is 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further
+ * need to use flags from the field vector
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u16 payload_len;
+	u8 next_hdr;
+	u8 hop_limit;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_sctp_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u32 verification_tag;
+	u32 check;
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_sctp_hdr sctp_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This mapping table entry maps every word within a given protocol
+ * structure to its real byte offset, per the specification of that
+ * protocol header.
+ * E.g. the dst address is 3 words in the Ethernet header; the
+ * corresponding word offsets in the packet are bytes 0, 2 and 4, and the
+ * src address follows at bytes 6, 8 and 10.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 14/32] net/ice/base: add structures for RX/TX queues
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (12 preceding siblings ...)
  2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 13/32] net/ice/base: add protocol structures and defines Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 15/32] net/ice/base: add OS specific implementation Wenzhuo Lu
                     ` (17 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures that define how the RX/TX queues
are used.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 ++++++++++++++++++++++++++++++++++
 1 file changed, 2291 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..d27045f
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-IP values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
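+
+/* Illustrative helpers (sketches, not part of this patch): extract the
+ * ptype and packet buffer length from a legacy descriptor's qword1 with
+ * the shift/mask pairs defined above.
+ */
+static inline u16 ice_example_rx_ptype(u64 qword1)
+{
+	return (u16)((qword1 & ICE_RXD_QW1_PTYPE_M) >> ICE_RXD_QW1_PTYPE_S);
+}
+
+static inline u16 ice_example_rx_pkt_len(u64 qword1)
+{
+	return (u16)((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+		     ICE_RXD_QW1_LEN_PBUF_S);
+}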
+
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 2
+ * Flex-field 0: RSS hash lower 16 bits
+ * Flex-field 1: RSS hash upper 16 bits
+ * Flex-field 2: Flow ID lower 16 bits
+ * Flex-field 3: Flow ID upper 16 bits
+ * Flex-field 4: reserved, VLAN ID taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 3
+ * Flex-field 0: Source VSI
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile ID 4
+ * Flex-field 0: Destination VSI
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 6
+ * Flex-field 0: RSS hash lower 16 bits
+ * Flex-field 1: RSS hash upper 16 bits
+ * Flex-field 2: Flow ID lower 16 bits
+ * Flex-field 3: Source VSI
+ * Flex-field 4: reserved, VLAN ID taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: There are a total
+ * of 64 profiles; profile IDs 0 and 1 are legacy, and
+ * profiles 2-63 are flex profiles that can be programmed
+ * with specific metadata (profile 7 is reserved for HW)
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex Descriptor DWORD Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32b_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32b_rx_flex_desc.ptype_flex_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32b_rx_flex_desc.pkt_len member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
+
+/* for ice_32b_rx_flex_desc.hdr_len_sph_flex_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
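+
+/* Illustrative packing table built with ICE_CTX_STORE (the widths and lsb
+ * positions below are example values only, not the real HW layout): each
+ * entry ties a field of the SW struct to its bit position in the HW
+ * context image.
+ */
+#ifdef ICE_CTX_STORE_EXAMPLE
+static const struct ice_ctx_ele example_rlan_ctx_info[] = {
+	ICE_CTX_STORE(ice_rlan_ctx, head,	13, 0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,	8, 13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,	57, 32),
+	{ 0, 0, 0, 0 } /* table terminator */
+};
+#endif /* ICE_CTX_STORE_EXAMPLE */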
+
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
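+
+/* Illustrative helper (a sketch, not part of this patch; assumes
+ * CPU_TO_LE64 from ice_osdep.h): compose a data descriptor's
+ * cmd_type_offset_bsz quadword from the fields defined above.
+ */
+static inline __le64
+ice_example_build_ctob(u32 td_cmd, u32 td_offset, u16 size, u16 l2tag1)
+{
+	return CPU_TO_LE64(ICE_TX_DESC_DTYPE_DATA |
+			   ((u64)td_cmd << ICE_TXD_QW1_CMD_S) |
+			   ((u64)td_offset << ICE_TXD_QW1_OFFSET_S) |
+			   ((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+			   ((u64)l2tag1 << ICE_TXD_QW1_L2TAG1_S));
+}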
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0X7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
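+
+/* Minimal sketch of the work flow above (not part of this patch); it takes
+ * a decoded entry rather than indexing the table directly so it can live
+ * ahead of it: returns 1 for a tunneled IP packet, 0 for any other known
+ * packet, -1 if the ptype is unknown.
+ */
+static inline int
+ice_example_is_tunneled(const struct ice_rx_ptype_decoded *decoded)
+{
+	if (!decoded->known)
+		return -1; /* packet is unknown */
+	if (decoded->outer_ip == ICE_RX_PTYPE_OUTER_IP)
+		return decoded->tunnel_type != ICE_RX_PTYPE_TUNNEL_NONE;
+	return 0; /* L2 packet: decode via enum ice_rx_l2_ptype */
+}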
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC --> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT --> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
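+
+/* Illustrative sketch of a consumer of the table above, following the
+ * decision tree described in the comment at the top of this section.
+ * (Example code only; the helper below is hypothetical and the field
+ * names assume the ICE_PTT initializer order.)
+ */
+static inline bool ice_ptype_is_tunneled_example(u16 ptype)
+{
+	struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype);
+
+	if (!decoded.known)
+		return false;	/* unknown packet type */
+
+	/* Only packets with an IP outer header carry tunnel information */
+	return decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP &&
+	       decoded.tunnel_type != ICE_RX_PTYPE_TUNNEL_NONE;
+}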
+
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+#define ICE_LINK_SPEED_50000MBPS	50000
+#define ICE_LINK_SPEED_100000MBPS	100000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 15/32] net/ice/base: add OS specific implementation
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (13 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 14/32] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization Wenzhuo Lu
                     ` (16 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add some macro definitions and small functions which
are specific to DPDK.
Add a readme too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/base/README      |  22 ++
 drivers/net/ice/base/ice_osdep.h | 524 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 546 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_osdep.h

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..708f607
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.12.11, released by the team which develops the base
+drivers for the ice NICs. The base/ directory contains the original
+source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..dd25b75
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
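+
+/* Usage (illustrative): ice_debug(hw, ICE_DBG_TRACE, "VSI %u ready\n", id)
+ * prints only when ICE_DBG_TRACE is set in hw->debug_mask; ICE_DBG_TRACE
+ * is defined near the end of this file.
+ */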
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+	u16 len_l = len;					\
+	u8 *buf_l = buf;					\
+	int i;							\
+		for (i = 0; i < len_l; i += 8)				\
+			ice_debug(hw_l, type,				\
+				  "0x%04X  0x%016"PRIx64"\n",		\
+				  i, *((u64 *)((buf_l) + i)));		\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
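+
+/* Illustrative sketch (example code, not part of the base code): a
+ * read-modify-write through the raw accessors above. A real caller would
+ * obtain 'reg_addr' from ICE_PCI_REG_ADDR(hw, reg), or simply use
+ * rd32()/wr32().
+ */
+static inline void
+ice_reg_set_bits_example(volatile uint32_t *reg_addr, uint32_t bits)
+{
+	uint32_t val = ice_read_addr(reg_addr);
+
+	ICE_PCI_REG_WRITE(reg_addr, val | bits);
+}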
+
+#define BITS_PER_BYTE       8
+typedef u32 ice_bitmap_t;
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_TO_CHUNKS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define ice_declare_bitmap(name, bits) \
+	ice_bitmap_t name[BITS_TO_CHUNKS(bits)]
+
+#define BITS_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >>			\
+		((BITS_PER_BYTE * sizeof(ice_bitmap_t)) -		\
+		(((nr) - 1) % (BITS_PER_BYTE * sizeof(ice_bitmap_t))	\
+		 + 1)))
+#define BITS_PER_CHUNK          (BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define BIT_CHUNK(nr)           ((nr) / BITS_PER_CHUNK)
+#define BIT_IN_CHUNK(nr)        BIT((nr) % BITS_PER_CHUNK)
+
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = ~0u >> (8 - (sz % 8));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
+
+static inline int ice_find_first_bit(ice_bitmap_t *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+static inline int ice_find_next_bit(ice_bitmap_t *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	/* Scan all 'size' bits, not size rounded down to a byte multiple,
+	 * and return 'size' (not 'bits') when no further bit is set so
+	 * that for_each_set_bit() below terminates after the last set bit.
+	 */
+	for (i = bits; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u32 bits)
+{
+	u32 max_index = BITS_TO_CHUNKS(bits);
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc(NULL, s, 0)
+#define ice_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	/* Index the chunk containing 'nr' so that bits beyond the first
+	 * 32-bit chunk are handled too.
+	 */
+	__sync_fetch_and_or(&addr[BIT_CHUNK(nr)], BIT_IN_CHUNK(nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	/* AND with the inverted bit mask; the previous '0UL << nr' operand
+	 * was always zero and would have cleared the whole chunk.
+	 */
+	__sync_fetch_and_and(&addr[BIT_CHUNK(nr)], ~BIT_IN_CHUNK(nr));
+}
+
+static inline void
+ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
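+
+/* Illustrative sketch (example code, not part of the base code):
+ * declaring a two-chunk bitmap, setting bits in both chunks and walking
+ * them with for_each_set_bit().
+ */
+static inline u16 ice_bitmap_usage_example(void)
+{
+	ice_declare_bitmap(bm, 64);
+	u16 bit, cnt = 0;
+
+	ice_zero_bitmap(bm, 64);
+	ice_set_bit(3, bm);
+	ice_set_bit(35, bm);	/* lands in the second 32-bit chunk */
+
+	for_each_set_bit(bit, bm, 64)
+		cnt++;		/* visits bits 3 and 35 */
+
+	return cnt;		/* 2 */
+}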
+
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = src[i];
+
+	/* We want to only copy bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= src[i] & mask;
+}
+
+static inline bool
+ice_cmp_bitmap(ice_bitmap_t *bmp1, ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		if (bmp1[i] != bmp2[i])
+			return false;
+
+	/* We want to only compare bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	if ((bmp1[i] & mask) != (bmp2[i] & mask))
+		return false;
+
+	return true;
+}
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
+
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
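+
+/* Illustrative sketch (example code, not part of the base code): the
+ * expected alloc/free pairing for DMA-able memory.
+ */
+static inline int
+ice_dma_roundtrip_example(struct ice_hw *hw)
+{
+	struct ice_dma_mem mem = { 0 };
+
+	if (!ice_alloc_dma_mem(hw, &mem, 4096))
+		return -1;	/* memzone reservation failed */
+
+	/* ... program mem.pa into the HW, access mem.va from the CPU ... */
+
+	ice_free_dma_mem(hw, &mem);
+	return 0;
+}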
+
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head) is the same as in sys/queue.h */
+
+/* Note: parameters are swapped */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
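+
+/* Illustrative sketch (example code, not part of the base code): a node
+ * type embedding ice_list_entry and a walk over the list. Note that
+ * LIST_FOR_EACH_ENTRY_SAFE above is a plain alias of LIST_FOR_EACH_ENTRY,
+ * so it is not safe against deleting 'pos' while iterating.
+ */
+struct ice_example_node {
+	struct ice_list_entry entry;
+	u32 id;
+};
+
+static inline u32
+ice_list_sum_example(struct ice_list_head *head)
+{
+	struct ice_example_node *node;
+	u32 sum = 0;
+
+	LIST_FOR_EACH_ENTRY(node, head, ice_example_node, entry)
+		sum += node->id;
+
+	return sum;
+}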
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (14 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 15/32] net/ice/base: add OS specific implementation Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  9:46     ` Ferruh Yigit
  2018-12-14 12:05     ` David Marchand
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 17/32] net/ice: support device and queue ops Wenzhuo Lu
                     ` (15 subsequent siblings)
  31 siblings, 2 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Update the documents too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                             |   2 +
 config/common_base                      |   7 +
 doc/guides/nics/features/ice.ini        |  11 +
 doc/guides/nics/ice.rst                 |  80 ++++
 doc/guides/nics/index.rst               |   1 +
 doc/guides/rel_notes/release_19_02.rst  |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  54 +++
 drivers/net/ice/base/meson.build        |  27 ++
 drivers/net/ice/ice_ethdev.c            | 635 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 305 +++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/ice_rxtx.h              | 117 ++++++
 drivers/net/ice/meson.build             |  12 +
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 drivers/net/meson.build                 |   1 +
 mk/rte.app.mk                           |   1 +
 17 files changed, 1308 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cdb18e0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,8 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/ice.rst
+F: doc/guides/nics/features/ice.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/config/common_base b/config/common_base
index d12ae98..872f440 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,13 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..946ed04
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+======================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum Number of Queue Pairs``
+
+  The maximum number of queue pairs is decided by the HW. If not configured,
+  the application uses the number from the HW. Users can check the number by
+  calling the API ``rte_eth_dev_info_get``.
+  If users want to limit the number of queues, they can set a smaller number
+  with an EAL parameter like ``max_queue_pair_num=n``, as in the example
+  below.
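+
+  For example, assuming the device is passed through the EAL
+  whitelist/devargs syntax of this release (illustrative only)::
+
+    ./testpmd -w 18:00.0,max_queue_pair_num=8 -- -i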
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Limitations or Known issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+Ice code released in 19.02 is for evaluation only.
+
+
+Promiscuous mode not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As promiscuous mode is not supported at this stage, a port can only receive
+packets whose destination MAC address is the port's own.
+
+
+TX anti-spoofing cannot be disabled
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TX anti-spoofing is enabled by default and cannot be disabled at this stage.
+Any TX packet whose source MAC address is not the port's own will be dropped
+by the HW, which means io-fwd is not supported for now. MAC-fwd is
+recommended for evaluation.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1e46705..a205f15 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
     enic
     fm10k
     i40e
+    ice
     ifc
     igb
     ixgbe
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86..ca560b1 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added ICE net PMD**
+
+  Added the new ``ice`` net driver for Intel® Ethernet Network Adapters E810.
+  See the :doc:`../nics/ice` NIC guide for more details on this new driver.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..548972d
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,54 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
new file mode 100644
index 0000000..0cfc8cd
--- /dev/null
+++ b/drivers/net/ice/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+	'ice_controlq.c',
+	'ice_common.c',
+	'ice_sched.c',
+	'ice_switch.c',
+	'ice_nvm.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+		'-Wno-unused-but-set-variable',
+		'-Wno-unused-variable',
+]
+c_args = cflags
+
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ice_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..21cf7de
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,635 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 __rte_unused void *opaque)
+{
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	errno = 0; /* clear any stale errno before it is checked below */
+	num = strtoul(qp_value, &end, 10);
+
+	if (!num || (*end == '-') || errno) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int ret;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	/* A negative result means an invalid value was given. */
+	ret = rte_kvargs_process(kvlist, queue_num_key,
+				 ice_check_qp_num, NULL);
+	rte_kvargs_free(kvlist);
+
+	return ret < 0 ? 0 : ret;
+}
+
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* queue heap initialize */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize element  */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* Find best one */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry found that satisfies the request */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly the requested number of queues,
+	 * remove it from the free_list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested,
+		 * create a new entry for the alloc_list and adjust
+		 * the base and length of the entry left in the free_list.
+		 */
+		entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
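+
+/* Illustrative usage of the resource pool (hypothetical values):
+ *
+ *   struct ice_res_pool_info pool;
+ *
+ *   ice_res_pool_init(&pool, 0, 8);
+ *   ice_res_pool_alloc(&pool, 3);  returns base 0, free range is [3, 8)
+ *   ice_res_pool_alloc(&pool, 3);  returns base 3, free range is [6, 8)
+ *
+ * The lookup is best-fit: an exact-length free entry is preferred over
+ * splitting a larger one.
+ */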
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Default TC 0 only for now; multi-TC support needs to be added later.
+	 * Configure TC and queue mapping parameters: for each enabled TC,
+	 * allocate qpnum_per_tc queues to that traffic class.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust the queue number to actual queues that can be applied */
+	vsi->nb_qps = 0x1 << bsf;
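+	/* Note: rte_bsf32() returns the index of the least significant set
+	 * bit, so the adjusted value is the largest power-of-two divisor of
+	 * nb_qps (e.g. 8 stays 8, while 6 becomes 2), which satisfies the
+	 * power-of-2 requirement noted for ICE_MAX_Q_PER_TC.
+	 */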
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+#define ICE_TC_QUEUE_TABLE_DFLT 0x00FAC688
+	info->ingress_table  = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->egress_table   = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->outer_up_table = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	return 0;
+}
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr
+		((struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/*  Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int qp_num = ice_config_max_queue_pair_num(dev->device->devargs);
+
+	if (qp_num > 0)
+		pf->lan_nb_qp_max = (uint16_t)qp_num;
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of VSI add/update command.
+	 * Suppose vsi->base_queue is 0 now, don't consider SRIOV, VMDQ
+	 * cases in the first stage. Only Main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* store VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* Only TC0 at the beginning. What is needed here is the maximum
+	 * number of TX queues; currently vsi->nb_qps holds that value.
+	 * Correct this if anything changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_release_vsi(pf->main_vsi);
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log)
+{
+	ice_logtype_init = rte_log_register("pmd.net.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..94e45c8
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,305 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12 bit number.
+ * The VFTA array is actually a 4096 bit array, 128 of 32bit elements.
+ * 2^5 = 32. The val of lower 5 bits specifies the bit in the 32bit element.
+ * The higher 7 bit val specifies VFTA array index.
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
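+/* Example: vlan_id 100 (0x064) -> ICE_VFTA_IDX = 3, ICE_VFTA_BIT = 1 << 4 */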
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When the driver is loaded, only a default main VSI exists. When a
+	 * new VSI needs to be added, the HW needs to know how the VSIs are
+	 * organized. Besides that, a VSI is only an element and can't switch
+	 * packets by itself; a new VEB component must be added to perform the
+	 * switching. So a new VSI needs to specify its uplink VSI (parent
+	 * VSI) before it is created. The uplink VSI checks whether it already
+	 * has a VEB to switch packets; if not, it tries to create one. Then
+	 * the uplink VSI moves the new VSI into its sib_vsi_list to manage
+	 * all the downlink VSIs.
+	 *  sib_vsi_list: the list of VSIs that share the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It's NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+	uint16_t nb_msix;   /* The max number of msix vector */
+	uint8_t enabled_tc; /* The traffic class enabled */
+	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+	uint8_t vlan_filter_on; /* The VLAN filter enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF is associated with */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Next free software VSI idx.
+	 * To save effort, the indexes are not recycled; assume there
+	 * are more than enough of them.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* internal packet statistics, it should be excluded from the total */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* Valid in case 'on' is set to set pvid */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
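+
+/* Illustrative initialization (hypothetical values), setting PVID 100:
+ *
+ *   struct ice_vsi_vlan_pvid_info pvid_info = {
+ *       .on = 1,
+ *       .config.pvid = 100,
+ *   };
+ */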
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)(pf)->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
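+/* Round n down to the nearest power of two;
+ * e.g. ice_align_floor(6) == 4 and ice_align_floor(8) == 8.
+ */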
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
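+
+/* Usage example:
+ *   PMD_DRV_LOG(ERR, "Failed to start RX queue %u", queue_id);
+ * logs through the dynamic "pmd.net.ice.driver" logtype and appends
+ * a trailing newline via PMD_DRV_LOG_RAW.
+ */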
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of next staged packets */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
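+
+/* Illustrative use (hypothetical lengths for a plain IPv4/TCP packet):
+ *
+ *   union ice_tx_offload tx_offload = {
+ *       .l2_len = 14,
+ *       .l3_len = 20,
+ *       .l4_len = 20,
+ *   };
+ *
+ * All the lengths are then available packed in tx_offload.data.
+ */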
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
new file mode 100644
index 0000000..9ed7b27
--- /dev/null
+++ b/drivers/net/ice/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+	'ice_ethdev.c'
+	)
+
+deps += ['hash']
+includes += include_directories('base')
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 980eec2..45da3bb 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -17,6 +17,7 @@ drivers = ['af_packet',
 	'enic',
 	'failsafe',
 	'fm10k', 'i40e',
+	'ice',
 	'ifc',
 	'ixgbe',
 	'kni',
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 17/32] net/ice: support device and queue ops
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (15 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 18/32] net/ice: support getting device information Wenzhuo Lu
                     ` (14 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Normally, when the device is started or stopped, its queues
should be started or stopped as well. Both are supported in
this patch through the ops listed below; a sketch of the
matching application-level call sequence follows the tags.

Below ops are added,
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
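
A minimal sketch of the application-level call sequence that exercises
these ops (standard ethdev API; the port id, ring sizes and mempool are
placeholder values):

    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_configure(port_id, 1, 1, &port_conf); /* dev_configure */
    rte_eth_rx_queue_setup(port_id, 0, 512,
                           rte_eth_dev_socket_id(port_id),
                           NULL, mbuf_pool);           /* rx_queue_setup */
    rte_eth_tx_queue_setup(port_id, 0, 512,
                           rte_eth_dev_socket_id(port_id),
                           NULL);                      /* tx_queue_setup */
    rte_eth_dev_start(port_id);  /* dev_start, also starts the queues */
    /* ... RX/TX bursts ... */
    rte_eth_dev_stop(port_id);   /* dev_stop, stops the queues */
    rte_eth_dev_close(port_id);  /* dev_close */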
 config/common_base               |   2 +
 doc/guides/nics/features/ice.ini |   1 +
 doc/guides/nics/ice.rst          |   8 +
 drivers/net/ice/Makefile         |   3 +-
 drivers/net/ice/ice_ethdev.c     | 198 ++++++++-
 drivers/net/ice/ice_lan_rxtx.c   | 927 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       |  20 +
 drivers/net/ice/meson.build      |   3 +-
 8 files changed, 1159 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/config/common_base b/config/common_base
index 872f440..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,8 @@ CONFIG_RTE_LIBRTE_ICE_PMD=y
 CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
 
 # Compile burst-oriented AVF PMD driver
 #
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 085e848..a43a9cd 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 946ed04..96a594f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -38,6 +38,14 @@ Please note that enabling debugging options may affect system performance.
 
   Toggle display of generic debugging messages.
 
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
 Runtime Config Options
 ~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 548972d..19e1787 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -50,5 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 21cf7de..18663bd 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,6 +14,12 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
+static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
+
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
@@ -22,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -560,11 +578,41 @@
 }
 
 static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
+static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
@@ -595,6 +643,154 @@
 }
 
 static int
+ice_dev_configure(__rte_unused struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Initialize to TRUE. If any Rx queue doesn't meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc(NULL,
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc(NULL,
+					   vsi->rss_lut_size, 0);
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	}
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->idx, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->idx,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	/* program Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* program Rx queues' context in hardware */
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..5c2301a
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptor. It sets the register
+	 * to flex descriptor mode.
+	 * DPDK uses legacy descriptor. It should set the register back
+	 * to the default value, then uses legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size as the head split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u,"
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold defined in 64 descriptors units.
+	 * When the number of free descriptors goes below the lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
+	rx_ctx.lrxqthresh = 2;
+	/* Default: use the 32-byte descriptor; the VLAN tag is extracted
+	 * to L2TAG2 (1st).
+	 */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register*/
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		if (rxq->rx_nb_avail == 0)
+			return;
+		for (i = 0; i < rxq->rx_nb_avail; i++) {
+			struct rte_mbuf *mbuf;
+
+			mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+			rte_pktmbuf_free_seg(mbuf);
+		}
+		rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. It is said that QENA_STAT
+	 * follows QENA_REQ by not more than 10 us.
+	 * TODO: the wait counter needs to be changed later
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check if it is timeout */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions
+	(__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbufs");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
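
From the application side this entry point is normally reached through
rte_eth_dev_rx_queue_start(), most often for a deferred-start queue. A
minimal sketch, assuming port_id and the mempool mp are set up elsewhere:

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
start_deferred_rxq(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxc = { .rx_deferred_start = 1 };
	int ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
				     &rxc, mp);
	if (ret)
		return ret;
	ret = rte_eth_dev_start(port_id); /* queue 0 stays stopped */
	if (ret)
		return ret;
	/* Reaches ice_rx_queue_start() through the ops table. */
	return rte_eth_dev_rx_queue_start(port_id, 0);
}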
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or not set up",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register*/
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add LAN TX queue");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(DEBUG, "Failed to disable LAN TX queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocate a little more memory because the vectorized/bulk_alloc
+	 * RX functions don't check boundaries on every iteration.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	/* Compute the ring size, aligned to the DMA constraint. */
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket(NULL,
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero, the default values are used.
+	 */
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket(NULL,
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
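
The threshold rules in the comment block above are easiest to check with
concrete numbers. A hedged restatement of the same validation, not the
driver's code:

#include <errno.h>
#include <stdint.h>

static int
tx_thresh_ok(uint16_t nb_desc, uint16_t rs, uint16_t free_thresh)
{
	if (rs == 0 || free_thresh == 0)
		return -EINVAL;
	if (rs >= (uint16_t)(nb_desc - 2))	/* keep the sentinel slot */
		return -EINVAL;
	if (free_thresh >= (uint16_t)(nb_desc - 3))
		return -EINVAL;
	if (rs > free_thresh)
		return -EINVAL;
	if (nb_desc % rs != 0)	/* rs must divide the ring size */
		return -EINVAL;
	return 0;
}
/* Example: nb_desc = 512, rs = 32, free_thresh = 32 satisfies every
 * rule: 32 < 510, 32 < 509, 32 <= 32, 512 % 32 == 0.
 */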
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
 		uint64_t outer_l3_len:16; /* outer L3 Header Length */
 	};
 };
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 9ed7b27..beb0d39 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -5,7 +5,8 @@ subdir('base')
 objs = [base_objs]
 
 sources = files(
-	'ice_ethdev.c'
+	'ice_ethdev.c',
+	'ice_lan_rxtx.c'
 	)
 
 deps += ['hash']
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 18/32] net/ice: support getting device information
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (16 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 17/32] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 19/32] net/ice: support packet type getting Wenzhuo Lu
                     ` (13 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 103 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h     |  13 +++++
 3 files changed, 117 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index a43a9cd..af8f0d3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 18663bd..f41f6e8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -790,6 +793,106 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_KEEP_CRC |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_VLAN_FILTER;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS |
+		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->speed_capa = ETH_LINK_SPEED_10M |
+			       ETH_LINK_SPEED_100M |
+			       ETH_LINK_SPEED_1G |
+			       ETH_LINK_SPEED_2_5G |
+			       ETH_LINK_SPEED_5G |
+			       ETH_LINK_SPEED_10G |
+			       ETH_LINK_SPEED_20G |
+			       ETH_LINK_SPEED_25G |
+			       ETH_LINK_SPEED_40G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = ICE_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = ICE_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
+	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
+}
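
Applications read these values back through rte_eth_dev_info_get(). A
minimal sketch checking one of the RX offloads advertised above;
port_supports_keep_crc() is a hypothetical helper:

#include <rte_ethdev.h>

static int
port_supports_keep_crc(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info); /* filled by ice_dev_info_get() */
	return (info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC) != 0;
}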
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 94e45c8..3cefa5b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -102,6 +102,19 @@
 		       ICE_FLAG_RSS_AQ_CAPABLE | \
 		       ICE_FLAG_VF_MAC_BY_PF)
 
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
 struct ice_adapter;
 
 /**
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 19/32] net/ice: support packet type getting
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (17 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 18/32] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 20/32] net/ice: support link update Wenzhuo Lu
                     ` (12 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index f41f6e8..cd35c4e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -493,6 +494,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 5c2301a..8230bb2 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,6 +884,42 @@
 	rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -925,3 +961,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet gives the detailed meaning of each value.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
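
On the RX path the translated table entry is stamped into
mbuf->packet_type, so applications can classify with the standard ptype
masks. A hedged illustration, not part of this patch:

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* Match packets the table marks as IPv4 carried inside any tunnel. */
static int
is_tunneled_ipv4(const struct rte_mbuf *m)
{
	uint32_t pt = m->packet_type;

	return (pt & RTE_PTYPE_TUNNEL_MASK) != 0 &&
	       (pt & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
}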
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 20/32] net/ice: support link update
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (18 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 19/32] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 21/32] net/ice: support MTU setting Wenzhuo Lu
                     ` (11 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.
LSC interrupt is also enabled in this patch.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 332 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 334 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index af8f0d3..eb852ff 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -5,6 +5,8 @@
 ;
 [Features]
 Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index cd35c4e..4621eb6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -331,6 +334,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc(NULL, event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by the NIC for handling
+ * specific interrupts.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -488,6 +672,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -496,6 +681,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	pf->adapter->eth_dev = dev;
@@ -541,6 +727,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -587,6 +782,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -604,6 +801,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean up datapath events and queue/vector mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -629,6 +833,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	ice_dev_close(dev);
 
@@ -639,6 +845,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -757,6 +970,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info AQ command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -895,6 +1111,122 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
 }
 
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3
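
A minimal application-side sketch of what this patch enables: register for
RTE_ETH_EVENT_INTR_LSC and read the link state that ice_link_update()
publishes. Illustrative only, written against the 18.11-era ethdev API; it
is not part of the patch.

#include <stdio.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	if (event != RTE_ETH_EVENT_INTR_LSC)
		return 0;

	/* non-blocking read of the state the PMD stored in dev->data */
	rte_eth_link_get_nowait(port_id, &link);
	printf("Port %u link %s, speed %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

/* Registered once, after configuring with dev_conf.intr_conf.lsc = 1 so
 * the PMD enables LSE:
 *	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *				      lsc_event_cb, NULL);
 */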

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 21/32] net/ice: support MTU setting
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (19 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 20/32] net/ice: support link update Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 22/32] net/ice: support MAC ops Wenzhuo Lu
                     ` (10 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the mtu_set op.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index eb852ff..fab6442 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4621eb6..9cf843d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 };
 
 static void
@@ -1228,6 +1230,38 @@ static int ice_init_rss(struct ice_pf *pf)
 }
 
 static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	/* check if mtu is within the allowed range */
+	if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3
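
For reference, a hedged sketch of how an application drives this op through
the generic API. The 9728-byte maximum frame and the 8-byte double-VLAN
allowance are assumptions about the driver constants (ICE_FRAME_SIZE_MAX,
ICE_VLAN_TAG_SIZE), not part of the public API.

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
app_set_mtu(uint16_t port_id, uint16_t mtu)
{
	/* same arithmetic as ice_mtu_set(): L2 header + CRC + 2 VLAN tags */
	uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + 8;

	if (mtu < ETHER_MIN_MTU || frame_size > 9728)
		return -EINVAL;

	/* must be called on a stopped port, else the PMD returns -EBUSY */
	return rte_eth_dev_set_mtu(port_id, mtu);
}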

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 22/32] net/ice: support MAC ops
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (20 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 21/32] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 23/32] net/ice: support VLAN ops Wenzhuo Lu
                     ` (9 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
mac_addr_set
mac_addr_add
mac_addr_remove

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 236 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 238 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index fab6442..759a036 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -10,6 +10,8 @@ Link status event    = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9cf843d..39c26fe 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 };
 
 static void
@@ -336,6 +346,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -544,6 +678,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -629,6 +766,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximum number of the TX queues.
 	 * Currently vsi->nb_qps means it.
@@ -1261,6 +1413,90 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ret = ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to set manage mac");
+	}
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3
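
A short usage sketch for the three ops, via the generic ethdev calls that
dispatch to them (18.11-era struct ether_addr naming; the address value is
an arbitrary locally administered one, illustrative only).

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
demo_mac_ops(uint16_t port_id)
{
	struct ether_addr addr = {
		.addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} };
	int ret;

	/* mac_addr_add: install an extra unicast filter (pool 0) */
	ret = rte_eth_dev_mac_addr_add(port_id, &addr, 0);
	if (ret != 0)
		return ret;

	/* mac_addr_set: replace the default address */
	ret = rte_eth_dev_default_mac_addr_set(port_id, &addr);
	if (ret != 0)
		return ret;

	/* mac_addr_remove: drop the extra filter again */
	return rte_eth_dev_mac_addr_remove(port_id, &addr);
}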

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 23/32] net/ice: support VLAN ops
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (21 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 22/32] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 24/32] net/ice: support RSS Wenzhuo Lu
                     ` (8 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 doc/guides/nics/ice.rst          |  16 ++
 drivers/net/ice/ice_ethdev.c     | 590 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 609 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 759a036..5ac8e56 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update           = Y
 Jumbo frame          = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+VLAN filter          = Y
+VLAN offload         = Y
+QinQ offload         = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 96a594f..466af55 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -64,6 +64,22 @@ Driver compilation and testing
 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
 for details.
 
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+The VLAN filter only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
 
 Limitations or Known issues
 ---------------------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 39c26fe..58ac0af 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
 static void
@@ -470,6 +483,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * Vlan 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -829,6 +1133,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -881,6 +1186,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -916,6 +1226,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1498,6 +1810,284 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+		break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If insert pvid is enabled, only tagged pkts are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3
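
A hedged sketch exercising the four ops from an application.
rte_eth_dev_set_vlan_offload() updates rxmode.offloads and then invokes the
new vlan_offload_set op, which is why ice_vlan_offload_set() reads the flags
back from dev_conf. The VLAN ID, TPID, and PVID values below are arbitrary
examples.

#include <rte_ethdev.h>

static int
demo_vlan_ops(uint16_t port_id)
{
	int ret;

	/* vlan_filter_set: accept VLAN 10 */
	ret = rte_eth_dev_vlan_filter(port_id, 10, 1);
	if (ret != 0)
		return ret;

	/* vlan_offload_set: enable stripping, keep filtering on */
	ret = rte_eth_dev_set_vlan_offload(port_id,
					   ETH_VLAN_STRIP_OFFLOAD |
					   ETH_VLAN_FILTER_OFFLOAD);
	if (ret != 0)
		return ret;

	/* vlan_tpid_set: outer TPID 0x88a8 (meaningful with QinQ) */
	ret = rte_eth_dev_set_vlan_ether_type(port_id,
					      ETH_VLAN_TYPE_OUTER, 0x88a8);
	if (ret != 0)
		return ret;

	/* vlan_pvid_set: insert port VLAN 10 on transmit */
	return rte_eth_dev_set_vlan_pvid(port_id, 10, 1);
}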

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 24/32] net/ice: support RSS
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (22 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 23/32] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 25/32] net/ice: support RX queue interruption Wenzhuo Lu
                     ` (7 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
reta_update
reta_query
rss_hash_update
rss_hash_conf_get

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 drivers/net/ice/ice_ethdev.c     | 242 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 245 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 5ac8e56..953a869 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update           = Y
 Jumbo frame          = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 58ac0af..43b5803 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2006,6 +2020,234 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	struct ice_aqc_get_set_rss_keys *key_dw =
+		(struct ice_aqc_get_set_rss_keys *)key;
+
+	ret = ice_aq_set_rss_key(hw, vsi->idx, key_dw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key
+		(hw, vsi->idx,
+		 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	/* set hash key */
+	status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (status)
+		return status;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: default set to 0 as hf config is not supported now */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3
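
A sketch of reta_update from the application side: spread a 512-entry
redirection table evenly over nb_queues. The 512 matches the
ETH_RSS_RETA_SIZE_512 bound the patch checks against; illustrative only.

#include <string.h>
#include <rte_ethdev.h>

static int
demo_reta_update(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
						  RTE_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < ETH_RSS_RETA_SIZE_512; i++) {
		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= 1ULL << shift; /* entry is valid */
		reta_conf[idx].reta[shift] = i % nb_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   ETH_RSS_RETA_SIZE_512);
}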

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 25/32] net/ice: support RX queue interruption
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (23 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 24/32] net/ice: support RSS Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 26/32] net/ice: support FW version getting Wenzhuo Lu
                     ` (6 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rx_queue_intr_enable
rx_queue_intr_disable

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 230 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 231 insertions(+)
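
The two new ops hook into the generic ethdev Rx-interrupt API; a minimal
sketch of the sleep/wake pattern they enable follows, assuming the port was
configured with dev_conf.intr_conf.rxq = 1 so the PMD runs
ice_rxq_intr_setup(). The epoll helpers are the usual EAL ones (as in
l3fwd-power); illustrative only, not part of the patch.

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static void
wait_for_rx(uint16_t port_id)
{
	struct rte_epoll_event ev;

	/* bind queue 0's interrupt to this thread's epoll fd; in a real
	 * application this registration is done once, not per call */
	rte_eth_dev_rx_intr_ctl_q(port_id, 0, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* arm the interrupt (ends up in ice_rx_queue_intr_enable) */
	rte_eth_dev_rx_intr_enable(port_id, 0);

	/* sleep until the MSI-X vector fires */
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);

	/* disarm before returning to busy polling rx_burst */
	rte_eth_dev_rx_intr_disable(port_id, 0);
}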

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 953a869..2844f4c 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 43b5803..8eb96a0 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -1258,10 +1264,39 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 }
 
 static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and also clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
 ice_dev_stop(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1278,6 +1313,9 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -1405,6 +1443,158 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do actual bind */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+		rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
+			    0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1439,6 +1629,10 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	/* enable Rx interrupt and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -2247,6 +2441,42 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
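
For reference, the vector fan-out implemented by ice_vsi_queues_bind_intr()
above can be modelled in isolation: queues get a 1:1 queue/vector mapping
while dedicated MSI-X vectors remain (the vfio-pci case), and once vectors
run out the remaining queues all share the current vector (igb_uio always
shares a single vector). This is a minimal stand-alone sketch; the function
and array names are illustrative, not driver symbols.

    #include <stdio.h>

    static void map_queues_to_vectors(int nb_queues, int nb_vectors,
                                      int first_vector, int vec_of_queue[])
    {
            int vec = first_vector;
            int q;

            for (q = 0; q < nb_queues; q++) {
                    if (nb_vectors <= 1) {
                            /* out of dedicated vectors: all remaining
                             * queues share the current one
                             */
                            for (; q < nb_queues; q++)
                                    vec_of_queue[q] = vec;
                            break;
                    }
                    vec_of_queue[q] = vec++; /* 1:1 while vectors remain */
                    nb_vectors--;
            }
    }

    int main(void)
    {
            int vec_of_queue[8];
            int q;

            /* 8 Rx queues, 4 vectors starting at vector 1 */
            map_queues_to_vectors(8, 4, 1, vec_of_queue);
            for (q = 0; q < 8; q++)
                    printf("queue %d -> vector %d\n", q, vec_of_queue[q]);
            return 0;
    }

Queues 0-2 land on vectors 1-3 and queues 3-7 share vector 4, matching the
driver's behaviour when intr_handle->nb_efd is smaller than the queue count.
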
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 26/32] net/ice: support FW version getting
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (24 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 25/32] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 27/32] net/ice: support EEPROM information getting Wenzhuo Lu
                     ` (5 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the fw_version_get op.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 2844f4c..4867433 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,7 @@ RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+FW version           = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8eb96a0..9638678 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2478,6 +2481,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
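
The return convention above follows the ethdev fw_version_get contract:
return 0 on success, or, when the buffer is too small, the number of bytes
needed including the terminating '\0' so the caller can retry. A small
stand-alone sketch of that contract (the version numbers are made up):

    #include <stdio.h>
    #include <stddef.h>

    static int fake_fw_version_get(char *buf, size_t size)
    {
            int ret = snprintf(buf, size, "%d.%d.%05d %d.%d",
                               4, 0, 2, 1, 5); /* fw maj/min/build, api */

            ret += 1; /* account for the terminating '\0' */
            if (size < (size_t)ret)
                    return ret; /* bytes the caller actually needs */
            return 0;
    }

    int main(void)
    {
            char small[4], big[32] = "";
            int need = fake_fw_version_get(small, sizeof(small));

            if (need > 0) /* truncated: retry with enough room */
                    fake_fw_version_get(big, sizeof(big));
            printf("needed %d bytes: \"%s\"\n", need, big);
            return 0;
    }
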
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 27/32] net/ice: support EEPROM information getting
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (25 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 26/32] net/ice: support FW version getting Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 28/32] net/ice: support statistics Wenzhuo Lu
                     ` (4 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add the below ops:
get_eeprom_length
get_eeprom

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 45 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 4867433..c939b52 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -20,6 +20,7 @@ VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
 FW version           = Y
+Module EEPROM dump   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9638678..b51f4d3 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 };
 
 static void
@@ -2581,6 +2586,46 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
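
Worth noting in ice_get_eeprom() above: the NVM shadow RAM is addressed in
16-bit words, so the byte-based offset and length from the request are
halved before the range check and the per-word read loop. A stand-alone
model of that addressing, where read_word() stands in for the real
ice_read_sr_word() and the 8-word shadow RAM is invented for the demo:

    #include <stdint.h>
    #include <stdio.h>

    #define SR_WORDS 8 /* assumed tiny shadow RAM, demo only */

    static const uint16_t shadow_ram[SR_WORDS] = {
            0x1111, 0x2222, 0x3333, 0x4444,
            0x5555, 0x6666, 0x7777, 0x8888,
    };

    static int read_word(uint16_t word_offset, uint16_t *data)
    {
            if (word_offset >= SR_WORDS)
                    return -1;
            *data = shadow_ram[word_offset];
            return 0;
    }

    static int read_eeprom(uint32_t byte_off, uint32_t byte_len,
                           uint16_t *out)
    {
            uint16_t offset = byte_off >> 1; /* bytes -> words */
            uint16_t length = byte_len >> 1;
            uint16_t i;

            if (offset + length > SR_WORDS)
                    return -1; /* request runs past the shadow RAM */

            for (i = 0; i < length; i++)
                    if (read_word(offset + i, &out[i]))
                            return -1;
            return 0;
    }

    int main(void)
    {
            uint16_t buf[2];

            if (read_eeprom(4, 4, buf) == 0) /* bytes 4..7, words 2..3 */
                    printf("0x%04x 0x%04x\n", buf[0], buf[1]);
            return 0;
    }
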
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 28/32] net/ice: support statistics
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (26 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 27/32] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 29/32] net/ice: support queue information getting Wenzhuo Lu
                     ` (3 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add the below ops:
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 566 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 568 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index c939b52..67fd044 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,8 @@ RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 Module EEPROM dump   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index b51f4d3..499d226 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,92 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and their offsets in the stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2625,6 +2717,480 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* reading the registers updates pf->stats; copy the values out */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats, reading current register values into offset */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_get_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
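
The heart of the statistics code above is the rollover-safe delta in the
ice_stat_update_32/40 helpers: the first read after a reset latches the raw
counter as the offset (which is why stats_reset() only has to clear
offset_loaded), and later reads compute new - offset modulo the counter
width so a hardware wrap never yields a negative delta. A stand-alone
sketch of the 32-bit case, with the offset_loaded bookkeeping simplified to
a parameter:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static void stat_update_32(uint64_t new_data, bool offset_loaded,
                               uint64_t *offset, uint64_t *stat)
    {
            if (!offset_loaded)
                    *offset = new_data; /* latch: delta starts at 0 */

            if (new_data >= *offset)
                    *stat = new_data - *offset;
            else /* the 32-bit counter wrapped since the latch */
                    *stat = new_data + ((uint64_t)1 << 32) - *offset;
    }

    int main(void)
    {
            uint64_t offset = 0, stat = 0;

            stat_update_32(0xFFFFFFF0, false, &offset, &stat); /* latch */
            stat_update_32(0x00000010, true, &offset, &stat);  /* wrap */
            printf("delta across the wrap = %" PRIu64 "\n", stat); /* 32 */
            return 0;
    }
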
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 29/32] net/ice: support queue information getting
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (27 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 28/32] net/ice: support statistics Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX Wenzhuo Lu
                     ` (2 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the below ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  5 ++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 499d226..fd1327b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.rx_queue_count               = ice_rx_queue_count,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 8230bb2..fed12b4 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -921,6 +921,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of every 4th rx descriptor in the
+		 * group, to avoid checking too frequently and degrading
+		 * performance too much.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
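
One detail of ice_rx_queue_count() above deserves a note: it samples the DD
(descriptor done) bit only every 4th descriptor, so the result is quantized
to a multiple of the stride in exchange for touching far fewer descriptors.
A host-side model of that scan over a plain boolean ring:

    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SIZE     16
    #define SCAN_INTERVAL 4

    static int rx_queue_count(const bool dd[RING_SIZE], int tail)
    {
            int desc = 0;
            int idx = tail;

            while (desc < RING_SIZE && dd[idx]) {
                    desc += SCAN_INTERVAL;
                    idx += SCAN_INTERVAL;
                    if (idx >= RING_SIZE)
                            idx -= RING_SIZE; /* wrap the circular ring */
            }
            return desc;
    }

    int main(void)
    {
            bool dd[RING_SIZE] = {false};
            int i;

            for (i = 0; i < 6; i++) /* 6 done descriptors from the tail */
                    dd[(14 + i) % RING_SIZE] = true;
            /* reports 8, not 6: the count is quantized by the stride */
            printf("count = %d\n", rx_queue_count(dd, 14));
            return 0;
    }
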
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (28 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 29/32] net/ice: support queue information getting Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14 13:00     ` Ferruh Yigit
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 31/32] net/ice: support advance RX/TX Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 32/32] net/ice: support descriptor ops Wenzhuo Lu
  31 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   5 +
 drivers/net/ice/ice_ethdev.c     |   5 +
 drivers/net/ice/ice_lan_rxtx.c   | 568 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h       |   8 +
 4 files changed, 584 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 67fd044..19655f1 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,14 +11,19 @@ Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 VLAN filter          = Y
+CRC offload          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
 Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index fd1327b..6a51033 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1260,6 +1260,9 @@ struct ice_xstats_name_off {
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
@@ -1732,6 +1735,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
 	/* enable Rx interrput and mapping Rx queue to interrupt vector */
 	if (ice_rxq_intr_setup(dev))
 		return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index fed12b4..1b1bf47 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,8 +884,81 @@
 	rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -917,7 +990,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1028,6 +1103,495 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy the ring descriptor to a temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%"PRIx64"\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static const uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * in the case of a non-tunneled packet, outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus the number of context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ_PKT) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* set the End of Packet (EOP) bit on the last descriptor */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside the supported range is considered malicious
+			 */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+		dev->tx_pkt_burst = ice_xmit_pkts;
+		dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 31/32] net/ice: support advance RX/TX
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (29 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 32/32] net/ice: support descriptor ops Wenzhuo Lu
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the Rx functions for scattered packets and bulk allocation.
Add the simple Tx function.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_lan_rxtx.c   | 660 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 659 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 19655f1..300eced 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,6 +11,7 @@ Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+Scattered Rx         = Y
 TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 1b1bf47..986cbc6 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -957,6 +957,431 @@
 	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update Rx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* new allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -990,7 +1415,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1313,6 +1742,20 @@
 	return 0;
 }
 
+/* Construct the tx flags */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1531,10 +1974,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine whether the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	PMD_INIT_FUNC_TRACE();
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using the scattered Rx function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1588,8 +2234,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple Tx path will be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal Tx path will be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v4 32/32] net/ice: support descriptor ops
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
                     ` (30 preceding siblings ...)
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 31/32] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-14  8:35   ` Wenzhuo Lu
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-14  8:35 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     |  2 ++
 drivers/net/ice/ice_lan_rxtx.c   | 58 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       |  2 ++
 4 files changed, 64 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 300eced..196b8d5 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -25,6 +25,8 @@ QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6a51033..2ec9e89 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,8 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 986cbc6..8bfd34f 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1490,6 +1490,64 @@
 	return desc;
 }
 
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..a0aa8f9 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,8 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3
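Once wired into eth_dev_ops, the new ops are reached through the generic
ethdev API. A minimal usage sketch; port_id, queue_id and the offset of
32 are placeholder values, not taken from the patch:

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void
  poll_desc_status(uint16_t port_id, uint16_t queue_id)
  {
  	/* status of the Rx descriptor 32 entries past the current tail */
  	if (rte_eth_rx_descriptor_status(port_id, queue_id, 32) ==
  	    RTE_ETH_RX_DESC_DONE)
  		printf("Rx descriptor done, a packet is ready\n");

  	/* status of the Tx descriptor at the same offset */
  	if (rte_eth_tx_descriptor_status(port_id, queue_id, 32) ==
  	    RTE_ETH_TX_DESC_DONE)
  		printf("Tx descriptor done, the slot can be reused\n");
  }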

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
  2018-12-14  2:38       ` Lu, Wenzhuo
@ 2018-12-14  8:47         ` Ferruh Yigit
  2018-12-16  1:43           ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-14  8:47 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev

On 12/14/2018 2:38 AM, Lu, Wenzhuo wrote:
<...>

>>> @@ -0,0 +1,30 @@
>>> +# SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Intel
>>> +Corporation
>>> +
>>> +sources = [
>>> +	'ice_controlq.c',
>>> +	'ice_common.c',
>>> +	'ice_sched.c',
>>> +	'ice_switch.c',
>>> +	'ice_nvm.c',
>>
>> ice_dcb.c? It is in the base folder, isn't it compiled?
> Currently we don’t use it. Just leave it uncompiled.

Why not remove it now, and add back when used?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-14  9:46     ` Ferruh Yigit
  2018-12-14 11:19       ` Zhang, Qi Z
  2018-12-17  4:54       ` Lu, Wenzhuo
  2018-12-14 12:05     ` David Marchand
  1 sibling, 2 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-14  9:46 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> +CFLAGS_BASE_DRIVER = -wd593 -wd188

This is causing following warning for icc [1], new icc versions require the
syntax "-diag-disable ###" instead of "-wd###", please check [2].


[1]
command line remark #10010: option '-wd593' is deprecated and will be removed in
a future release. See '-help deprecated'

[2]
Commit f16d0b36f816 ("drivers/net: fix icc deprecated parameter warning")


$ icc --version
icc (ICC) 19.0.1.144 20181018
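
A minimal sketch of the suggested Makefile change, keeping the same
diagnostic numbers and only switching to the -diag-disable syntax from
the commit referenced above:

  ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
  CFLAGS_BASE_DRIVER = -diag-disable 593 -diag-disable 188
  endif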

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14  9:46     ` Ferruh Yigit
@ 2018-12-14 11:19       ` Zhang, Qi Z
  2018-12-17  4:54       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-14 11:19 UTC (permalink / raw)
  To: Yigit, Ferruh, Lu, Wenzhuo, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Friday, December 14, 2018 5:46 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
> 
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> > +CFLAGS_BASE_DRIVER = -wd593 -wd188
> 
> This is causing following warning for icc [1], new icc versions require the syntax
> "-diag-disable ###" instead of "-wd###", please check [2].
> 
> 
> [1]
> command line remark #10010: option '-wd593' is deprecated and will be removed
> in a future release. See '-help deprecated'
> 
> [2]
> Commit f16d0b36f816 ("drivers/net: fix icc deprecated parameter warning")

OK, I can capture this during apply to dpdk-next-inet-intel if no v5 needed.

Thanks
Qi

> 
> 
> $ icc --version
> icc (ICC) 19.0.1.144 20181018

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization Wenzhuo Lu
  2018-12-14  9:46     ` Ferruh Yigit
@ 2018-12-14 12:05     ` David Marchand
  2018-12-17  1:11       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: David Marchand @ 2018-12-14 12:05 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev, Qiming Yang, Xiaoyun Li, Jingjing Wu

On Fri, Dec 14, 2018 at 9:34 AM Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:

>
> diff --git a/doc/guides/nics/features/ice.ini
> b/doc/guides/nics/features/ice.ini
> new file mode 100644
> index 0000000..085e848
> --- /dev/null
> +++ b/doc/guides/nics/features/ice.ini
> @@ -0,0 +1,11 @@
> +;
> +; Supported features of the 'ice' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +BSD nic_uio          = Y
> +Linux UIO            = Y
> +Linux VFIO           = Y
> +x86-32               = Y
> +x86-64               = Y
>

[snip]


> +/**
> + * Driver initialization routine.
> + * Invoked once at EAL init time.
> + * Register itself as the [Poll Mode] Driver of PCI devices.
> + */
> +RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(ice, pci_id_ice_map);
> +
> +RTE_INIT(ice_init_log)
> +{
> +       ice_logtype_init = rte_log_register("pmd.net.ice.init");
> +       if (ice_logtype_init >= 0)
> +               rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
> +       ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
> +       if (ice_logtype_driver >= 0)
> +               rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
> +}
>

If this pmd is uio/vfio based, then you must report it via
RTE_PMD_REGISTER_KMOD_DEP().
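
A minimal sketch of the suggested declaration, following the convention
used by other uio/vfio based PMDs (the exact kernel module list is an
assumption to be confirmed for this device):

  /* placed next to the other registration macros in ice_ethdev.c */
  RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic | vfio-pci");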


-- 
David Marchand

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
  2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-14 13:00     ` Ferruh Yigit
  2018-12-14 16:41       ` Thomas Monjalon
  2018-12-17  6:47       ` Lu, Wenzhuo
  0 siblings, 2 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-14 13:00 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu, Thomas Monjalon

On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> +
> +	/* Check to make sure the last descriptor to clean is done */
> +	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> +	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
> +	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
> +		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
> +				"(port=%d queue=%d) value=0x%lx\n",
> +				desc_to_clean_to,
> +				txq->port_id, txq->queue_id,
> +				txd[desc_to_clean_to].cmd_type_offset_bsz);

Causing build error for i686 [1], should use PRIx64 for 64bit variables.

Perhaps we should create a rule in checkpatch to check and warn %lx %lu formats
`git grep -n '%l[xud]' drivers/net/ice/` shows only this occurrence in 'ice' but
there are more in other drivers...


[1]
In file included from .../i686-native-linuxapp-gcc/include/rte_ethdev.h:150,
                 from .../i686-native-linuxapp-gcc/include/rte_ethdev_driver.h:18,
                 from .../drivers/net/ice/ice_lan_rxtx.c:5:
.../drivers/net/ice/ice_lan_rxtx.c: In function ‘ice_xmit_cleanup’:
.../drivers/net/ice/ice_lan_rxtx.c:1776:46: error: format ‘%lx’ expects argument
of type ‘long unsigned int’, but argument 8 has type ‘uint64_t’ {aka ‘volatile
long long unsigned int’} [-Werror=format=]
     txd[desc_to_clean_to].cmd_type_offset_bsz);
                                              ^
.../i686-native-linuxapp-gcc/include/rte_log.h:322:25: note: in definition of
macro ‘RTE_LOG’
    RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
                         ^
.../drivers/net/ice/ice_lan_rxtx.c:1772:3: note: in expansion of macro
‘PMD_TX_FREE_LOG’
   PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
   ^~~~~~~~~~~~~~~
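
A minimal sketch of the PRIx64 fix for the log statement quoted above,
assuming <inttypes.h> is reachable through the existing includes:

  PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
		  "(port=%d queue=%d) value=0x%"PRIx64"\n",
		  desc_to_clean_to,
		  txq->port_id, txq->queue_id,
		  txd[desc_to_clean_to].cmd_type_offset_bsz);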

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
  2018-12-14 13:00     ` Ferruh Yigit
@ 2018-12-14 16:41       ` Thomas Monjalon
  2018-12-17  6:47       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Thomas Monjalon @ 2018-12-14 16:41 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Wenzhuo Lu, dev, Qiming Yang, Xiaoyun Li, Jingjing Wu

14/12/2018 14:00, Ferruh Yigit:
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > +
> > +	/* Check to make sure the last descriptor to clean is done */
> > +	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> > +	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
> > +	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
> > +		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
> > +				"(port=%d queue=%d) value=0x%lx\n",
> > +				desc_to_clean_to,
> > +				txq->port_id, txq->queue_id,
> > +				txd[desc_to_clean_to].cmd_type_offset_bsz);
> 
> Causing build error for i686 [1], should use PRIx64 for 64bit variables.
> 
> Perhaps we should create a rule in checkpatch to check and warn %lx %lu formats
> `git grep -n '%l[xud]' drivers/net/ice/` shows only this occurrence in 'ice' but
> there are more in other drivers...

If it's clear to everybody that checkpatch can return some false positive,
yes I am for checking '%l[xud]'. It is most of the time a mistake.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
  2018-12-14  8:47         ` Ferruh Yigit
@ 2018-12-16  1:43           ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-16  1:43 UTC (permalink / raw)
  To: Yigit, Ferruh, dev

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 4:48 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build
> 
> On 12/14/2018 2:38 AM, Lu, Wenzhuo wrote:
> <...>
> 
> >>> @@ -0,0 +1,30 @@
> >>> +# SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2018 Intel
> >>> +Corporation
> >>> +
> >>> +sources = [
> >>> +	'ice_controlq.c',
> >>> +	'ice_common.c',
> >>> +	'ice_sched.c',
> >>> +	'ice_switch.c',
> >>> +	'ice_nvm.c',
> >>
> >> ice_dcb.c? It is in base folder, isn't is compiled?
> > Currently we don’t use it. Just leave it uncompiled.
> 
> Why not remove it now, and add back when used?
Whether or not to touch the base code is a question. But anyway, since it's an entirely unused file, I can remove it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14 12:05     ` David Marchand
@ 2018-12-17  1:11       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-17  1:11 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi David,
From: David Marchand [mailto:david.marchand@redhat.com]
Sent: Friday, December 14, 2018 8:05 PM
To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
Cc: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization



On Fri, Dec 14, 2018 at 9:34 AM Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y

[snip]

+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(ice, pci_id_ice_map);
+
+RTE_INIT(ice_init_log)
+{
+       ice_logtype_init = rte_log_register("pmd.net.ice.init");
+       if (ice_logtype_init >= 0)
+               rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+       ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
+       if (ice_logtype_driver >= 0)
+               rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}

If this pmd is uio/vfio based, then you must report it via RTE_PMD_REGISTER_KMOD_DEP().
Thanks for the reminder. Will add it.

--
David Marchand


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization
  2018-12-14  9:46     ` Ferruh Yigit
  2018-12-14 11:19       ` Zhang, Qi Z
@ 2018-12-17  4:54       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-17  4:54 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 5:46 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 16/32] net/ice: support device
> initialization
> 
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> > +CFLAGS_BASE_DRIVER = -wd593 -wd188
> 
> This is causing following warning for icc [1], new icc versions require the
> syntax "-diag-disable ###" instead of "-wd###", please check [2].
> 
> 
> [1]
> command line remark #10010: option '-wd593' is deprecated and will be
> removed in a future release. See '-help deprecated'
> 
> [2]
> Commit f16d0b36f816 ("drivers/net: fix icc deprecated parameter warning")
> 
> 
> $ icc --version
> icc (ICC) 19.0.1.144 20181018
Thanks for the comments. Will change it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
  2018-12-14 13:00     ` Ferruh Yigit
  2018-12-14 16:41       ` Thomas Monjalon
@ 2018-12-17  6:47       ` Lu, Wenzhuo
  1 sibling, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-17  6:47 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing, Thomas Monjalon

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, December 14, 2018 9:00 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX
> 
> On 12/14/2018 8:35 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > +
> > +	/* Check to make sure the last descriptor to clean is done */
> > +	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> > +	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
> > +	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
> > +		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done
> "
> > +				"(port=%d queue=%d) value=0x%lx\n",
> > +				desc_to_clean_to,
> > +				txq->port_id, txq->queue_id,
> > +				txd[desc_to_clean_to].cmd_type_offset_bsz);
> 
> Causing build error for i686 [1], should use PRIx64 for 64bit variables.
> 
> Perhaps we should create a rule in checkpatch to check and warn %lx %lu
> formats `git grep -n '%l[xud]' drivers/net/ice/` shows only this occurrence in
> 'ice' but there are more in other drivers...
> 
> 
> [1]
> In file included from .../i686-native-linuxapp-gcc/include/rte_ethdev.h:150,
>                  from .../i686-native-linuxapp-gcc/include/rte_ethdev_driver.h:18,
>                  from .../drivers/net/ice/ice_lan_rxtx.c:5:
> .../drivers/net/ice/ice_lan_rxtx.c: In function ‘ice_xmit_cleanup’:
> .../drivers/net/ice/ice_lan_rxtx.c:1776:46: error: format ‘%lx’ expects
> argument of type ‘long unsigned int’, but argument 8 has type ‘uint64_t’
> {aka ‘volatile long long unsigned int’} [-Werror=format=]
>      txd[desc_to_clean_to].cmd_type_offset_bsz);
>                                               ^
> .../i686-native-linuxapp-gcc/include/rte_log.h:322:25: note: in definition of
> macro ‘RTE_LOG’
>     RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
>                          ^
> .../drivers/net/ice/ice_lan_rxtx.c:1772:3: note: in expansion of macro
> ‘PMD_TX_FREE_LOG’
>    PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
>    ^~~~~~~~~~~~~~~
Thanks for the check. Will correct it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (22 preceding siblings ...)
  2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
@ 2018-12-17  7:37 ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
                     ` (30 more replies)
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
  24 siblings, 31 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds support for a new net PMD for the
Intel® Ethernet Network Adapter E810 series, also
called ice.

The features below are enabled by this patch set:

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics & extended statistics (xstats).

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

---
v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed the support of the secondary process.
 - Split the base code into more patches.
 - Passed NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.

v4:
 - Moved meson build forward.
 - Updated and split the document across the related patches.
 - Updated the device info.
 - Removed unnecessary compile config.
 - Removed the code of ops rx_descriptor_done.
 - Adjusted the order of the functions.
 - Added error print for MAC setting.

v5:
 - Removed ice_dcb.c/h.
 - Fixed compile errors with icc and on i686.
 - Announced the dependency on uio and vfio.

Paul M Stillwell Jr (13):
  net/ice/base: add registers for Intel(R) E800 Series NIC
  net/ice/base: add basic structures
  net/ice/base: add admin queue structures and commands
  net/ice/base: add sideband queue info
  net/ice/base: add device IDs for Intel(r) E800 Series NICs
  net/ice/base: add control queue information
  net/ice/base: add basic transmit scheduler
  net/ice/base: add virtual switch code
  net/ice/base: add code to work with the NVM
  net/ice/base: add common functions
  net/ice/base: add various headers
  net/ice/base: add protocol structures and defines
  net/ice/base: add structures for RX/TX queues

Wenzhuo Lu (18):
  net/ice/base: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops

 MAINTAINERS                              |    8 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   38 +
 doc/guides/nics/ice.rst                  |  104 +
 doc/guides/nics/index.rst                |    1 +
 doc/guides/rel_notes/release_19_02.rst   |    5 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   55 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1891 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_common.c        | 3521 +++++++++++
 drivers/net/ice/base/ice_common.h        |  186 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2291 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 5380 ++++++++++++++++
 drivers/net/ice/base/ice_sched.h         |  210 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2812 +++++++++
 drivers/net/ice/base/ice_switch.h        |  333 +
 drivers/net/ice/base/ice_type.h          |  869 +++
 drivers/net/ice/base/meson.build         |   27 +
 drivers/net/ice/ice_ethdev.c             | 3243 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_lan_rxtx.c           | 2872 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  154 +
 drivers/net/ice/meson.build              |   13 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 40 files changed, 36787 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 02/31] net/ice/base: add basic structures Wenzhuo Lu
                     ` (29 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the registers that comprise the Intel(R) 800
Series NIC. There is no functionality in this patch.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 MAINTAINERS                           |    6 +
 drivers/net/ice/base/ice_hw_autogen.h | 9815 +++++++++++++++++++++++++++++++++
 2 files changed, 9821 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba312..37f3bf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
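Every field in this header comes as a _S (bit shift) / _M (bit mask) pair, with MAKEMASK(m, s) expected to expand to ((m) << (s)) as defined near the top of the file. A sketch of reading and rewriting one field with such a pair; ice_get_arq_len()/ice_set_arq_len() are illustrative names only, and rd32()/wr32() are the accessors assumed from ice_osdep.h:

static inline u16 ice_get_arq_len(struct ice_hw *hw)
{
	u32 reg = rd32(hw, PF_FW_ARQLEN);

	/* mask first, then shift down to isolate the 10-bit length */
	return (u16)((reg & PF_FW_ARQLEN_ARQLEN_M) >> PF_FW_ARQLEN_ARQLEN_S);
}

static inline void ice_set_arq_len(struct ice_hw *hw, u16 len)
{
	u32 reg = rd32(hw, PF_FW_ARQLEN);

	/* read-modify-write: clear the field, then insert the new value */
	reg &= ~PF_FW_ARQLEN_ARQLEN_M;
	reg |= ((u32)len << PF_FW_ARQLEN_ARQLEN_S) & PF_FW_ARQLEN_ARQLEN_M;
	wr32(hw, PF_FW_ARQLEN, reg);
}

Masking before shifting on the read side keeps the high status bits (ARQVFE/ARQOVFL/ARQCRIT/ARQENABLE) in the same register out of the result.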
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
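The parameterized defines such as VF_MBX_ARQT(_VF) describe per-VF register arrays laid out at a 4-byte stride, with the matching _MAX_INDEX define giving the highest valid argument. A hedged sketch of using one; ice_mbx_set_vf_arqt() is an illustrative name only, with wr32() again assumed from ice_osdep.h:

static void ice_mbx_set_vf_arqt(struct ice_hw *hw, u16 vf_id, u16 tail)
{
	/* VF_MBX_ARQT(n) resolves to 0x0022C400 + n * 4 */
	if (vf_id > VF_MBX_ARQT_MAX_INDEX)
		return; /* only indices 0..255 exist in this array */

	wr32(hw, VF_MBX_ARQT(vf_id),
	     ((u32)tail << VF_MBX_ARQT_ARQT_S) & VF_MBX_ARQT_ARQT_M);
}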
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
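The eight UPnTC fields above are packed 3 bits apart (UPnTC_S == 3 * n, each mask 0x7 wide), so the user-priority-to-TC mapping can be read generically rather than through eight separate mask/shift pairs. A sketch, with ice_rup2tc() as a hypothetical helper and rd32() assumed from ice_osdep.h:

static u8 ice_rup2tc(struct ice_hw *hw, u8 up)
{
	u32 reg = rd32(hw, PRTDCB_RUP2TC);

	/* each UP (0..7) owns a 3-bit TC field starting at bit 3 * up */
	return (u8)((reg >> (up * 3)) & 0x7);
}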
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
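
The defines above follow the convention used throughout this header: a bare name is the register's MMIO byte offset, a `_S' suffix is a field's starting bit, a `_M' suffix is the field's mask already shifted into place (built with MAKEMASK(value, shift) or BIT(n)), and a parameterized name such as PRTDCB_TPFCTS(_i) addresses element _i of a register array whose last valid index is the matching _MAX_INDEX define. A minimal read-path sketch, assuming the rd32() accessor and the u32/struct ice_hw types that the base code's ice_osdep.h and shared headers are expected to provide:

/* Sketch: read the PFC timer for traffic class `tc' by masking the
 * field with its _M define, then shifting down by its _S define.
 * Assumes rd32(hw, reg) is the 32-bit register read helper.
 */
static inline u32
example_read_pfc_timer(struct ice_hw *hw, unsigned int tc)
{
        u32 reg;

        if (tc > PRTDCB_TPFCTS_MAX_INDEX)
                return 0; /* index out of range for this array */

        reg = rd32(hw, PRTDCB_TPFCTS(tc));
        return (reg & PRTDCB_TPFCTS_PFCTIMER_M) >> PRTDCB_TPFCTS_PFCTIMER_S;
}
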
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
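
The GL_ACLEXT block above, and the GL_PREEXT and GL_PSTEXT blocks that follow with the same layout, expose their tables through paired address/data registers: software writes a line index into a *_L2ADDR register, then reads or writes the matching *_L2DATA register. Going by its name, the AUTO_INC bit advances the line index after each data access so a table can be streamed without rewriting the address register; that semantic is an assumption here, not something this header states. A sketch under those assumptions, using rd32()/wr32() from ice_osdep.h and the XLT1 pair as the example:

/* Sketch: stream `n' lines of the ACL XLT1 table starting at
 * `first_line' through the indirect address/data window.
 * Assumes AUTO_INC advances the index after each data access.
 */
static inline void
example_read_acl_xlt1(struct ice_hw *hw, u8 blk, u16 first_line,
                      u32 *buf, u16 n)
{
        u16 i;

        wr32(hw, GL_ACLEXT_XLT1_L2ADDR(blk),
             ((u32)first_line & GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M) |
             GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M);
        for (i = 0; i < n; i++)
                buf[i] = rd32(hw, GL_ACLEXT_XLT1_L2DATA(blk));
}
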
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
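
GLFLXP_PTYPE_TRANSLATION packs four 8-bit translations per 32-bit register: the PTYPE_4N, PTYPE_4N_1, PTYPE_4N_2 and PTYPE_4N_3 fields at bit offsets 0, 8, 16 and 24 are the byte lanes for ptypes 4*_i through 4*_i+3, so the 256 registers cover 1024 ptypes. A lookup sketch, again assuming rd32() from ice_osdep.h:

/* Sketch: fetch the translated value for ptype `pt' (0..1023).
 * Register pt / 4 holds the entry; lane pt % 4 selects the byte.
 */
static inline u8
example_ptype_translation(struct ice_hw *hw, u16 pt)
{
        u32 reg = rd32(hw, GLFLXP_PTYPE_TRANSLATION(pt / 4));

        return (u8)((reg >> ((pt % 4) * 8)) & 0xFF);
}
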
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
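
Two-parameter array macros such as GLFLXP_RX_CMD_PROTIDS(_i, _j) linearize a two-dimensional register file: _i strides 4 bytes within a bank and _j strides 1024 bytes, i.e. one bank of 256 consecutive 32-bit registers per _j. For example:

/* Sketch: _i=10, _j=2 yields 0x0045A000 + 10 * 4 + 2 * 1024,
 * i.e. address 0x0045A828.
 */
static inline u32
example_protids_addr(void)
{
        return GLFLXP_RX_CMD_PROTIDS(10, 2);
}
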
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
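
The six extraction words of each RXDID profile share one field layout: PROT_MDID selects the metadata to extract, EXTRACTION_OFFSET locates it, and RXDID_OPCODE controls how it is applied. A write-path sketch for word 0, assuming wr32() from ice_osdep.h; `mdid', `off' and `opc' stand for caller-chosen values, not defines from this header:

/* Sketch: build the register value by shifting each field by its
 * _S define and masking with its _M define, then write it out.
 */
static inline void
example_set_flex_word0(struct ice_hw *hw, u8 rxdid, u8 mdid,
                       u16 off, u8 opc)
{
        u32 val = ((u32)mdid << GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S) &
                  GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M;

        val |= ((u32)off << GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S) &
               GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M;
        val |= ((u32)opc << GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S) &
               GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M;
        wr32(hw, GLFLXP_RXDID_FLX_WRD_0(rxdid), val);
}
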
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
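
QRXFLXP_CNTXT is the per-queue hook into the flexible pipeline: it selects which RXDID profile, and at what priority, a receive queue uses, with the TS bit at position 11 (a timestamp control, going by its name). A programming sketch, assuming wr32() from ice_osdep.h:

/* Sketch: point RX queue `qid' (0..QRXFLXP_CNTXT_MAX_INDEX) at
 * descriptor profile `rxdid'.
 */
static inline void
example_set_queue_rxdid(struct ice_hw *hw, u16 qid, u8 rxdid,
                        u8 prio, bool ts)
{
        u32 val = ((u32)rxdid << QRXFLXP_CNTXT_RXDID_IDX_S) &
                  QRXFLXP_CNTXT_RXDID_IDX_M;

        val |= ((u32)prio << QRXFLXP_CNTXT_RXDID_PRIO_S) &
               QRXFLXP_CNTXT_RXDID_PRIO_M;
        if (ts)
                val |= QRXFLXP_CNTXT_TS_M;
        wr32(hw, QRXFLXP_CNTXT(qid), val);
}
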
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
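+/* Illustrative sketch: GLQDC_DFD_FIFO_CFG_1 packs eight 3-bit priority
+ * fields at 4-bit strides, so a full configuration value can be
+ * assembled field by field before one write (wr32() assumed; the
+ * priority semantics themselves are per the datasheet):
+ *
+ *	u32 v = (prio0 << GLQDC_DFD_FIFO_CFG_1_PRIO_0_S) |
+ *		(prio1 << GLQDC_DFD_FIFO_CFG_1_PRIO_1_S);
+ *	wr32(hw, GLQDC_DFD_FIFO_CFG_1, v);
+ */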
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
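+/* Illustrative sketch (one common usage pattern, not mandated by this
+ * file): a PF-driven VF reset sets the software-reset trigger bit in
+ * VPGEN_VFRTRIG and then polls VPGEN_VFRSTAT until the reset-done bit
+ * is reported (rd32()/wr32() assumed):
+ *
+ *	wr32(hw, VPGEN_VFRTRIG(vf_id),
+ *	     rd32(hw, VPGEN_VFRTRIG(vf_id)) | VPGEN_VFRTRIG_VFSWR_M);
+ *	while (!(rd32(hw, VPGEN_VFRSTAT(vf_id)) & VPGEN_VFRSTAT_VFRD_M))
+ *		continue;	(real code bounds this loop with a timeout)
+ */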
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010c700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
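+/*
+ * Usage note (illustrative sketch, not part of the generated register list):
+ * each field above is described by a shift (_S) and mask (_M) pair, with the
+ * mask built from MAKEMASK(value, shift) or BIT(shift). Assuming the
+ * rd32()/wr32() accessors from the OS-dependent layer, a field is typically
+ * extracted as
+ *	idx = (rd32(hw, PFHMC_ERRORINFO) & PFHMC_ERRORINFO_PMF_INDEX_M) >>
+ *	      PFHMC_ERRORINFO_PMF_INDEX_S;
+ * and composed as
+ *	reg |= (idx << PFHMC_ERRORINFO_PMF_INDEX_S) &
+ *	       PFHMC_ERRORINFO_PMF_INDEX_M;
+ */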
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
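+/*
+ * Illustrative sketch (assumptions noted): after servicing a vector, a
+ * driver would typically re-enable it and clear its pending-bit-array entry
+ * with a single write, e.g.
+ *	wr32(hw, GLINT_DYN_CTL(vect), GLINT_DYN_CTL_INTENA_M |
+ *	     GLINT_DYN_CTL_CLEARPBA_M);
+ * vect is a hypothetical vector index; wr32() is the assumed register-write
+ * accessor from the OS-dependent layer.
+ */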
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _INT=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044c000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
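+/* Editor's sketch, not part of the generated register map: PF_FUNC_RID
+ * packs the function's PCI routing ID. A hypothetical decode, assuming
+ * the rd32() MMIO helper and the u8/u32 typedefs from ice_osdep.h:
+ *
+ *	u32 rid = rd32(hw, PF_FUNC_RID);
+ *	u8 bus = (rid & PF_FUNC_RID_BUS_NUMBER_M) >>
+ *		 PF_FUNC_RID_BUS_NUMBER_S;
+ *	u8 dev = (rid & PF_FUNC_RID_DEVICE_NUMBER_M) >>
+ *		 PF_FUNC_RID_DEVICE_NUMBER_S;
+ *	u8 fn = (rid & PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+ *		PF_FUNC_RID_FUNCTION_NUMBER_S;
+ */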
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
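+/* Editor's sketch, not part of the generated register map: every field
+ * above is described by a paired _S (shift) and _M (mask) macro, so
+ * field extraction follows one pattern throughout this file. A
+ * hypothetical example, assuming the rd32() helper from ice_osdep.h:
+ *
+ *	u32 cap = rd32(hw, GLPCI_LINKCAP);
+ *	u32 width = (cap & GLPCI_LINKCAP_MAX_LINK_WIDTH_M) >>
+ *		    GLPCI_LINKCAP_MAX_LINK_WIDTH_S;
+ */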
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
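+/* Editor's sketch, not part of the generated register map: the
+ * (_VF)-parameterized macros above stride 4 bytes per VF, valid up to
+ * the matching _MAX_INDEX. A hypothetical per-VF doorbell write,
+ * assuming the wr32() helper from ice_osdep.h and a caller-supplied
+ * vf_id/aeq_count:
+ *
+ *	if (vf_id <= VFPE_AEQALLOC_MAX_INDEX)
+ *		wr32(hw, VFPE_AEQALLOC(vf_id), aeq_count);
+ */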
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
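+/* Editor's sketch, not part of the generated register map: the GLPES
+ * statistics above are split into a 32-bit LO register and a 16- or
+ * 24-bit HI register. A hypothetical 48-bit read (LO first, HI masked
+ * by its _M macro), assuming rd32() and the u64 typedef from
+ * ice_osdep.h:
+ *
+ *	u64 segs = rd32(hw, GLPES_PFTCPRXSEGSLO(pf_id));
+ *	segs |= (u64)(rd32(hw, GLPES_PFTCPRXSEGSHI(pf_id)) &
+ *		      GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M) << 32;
+ */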
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
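+/* Usage sketch (illustrative): the *CL/*CH statistics pairs above form split
+ * 40-bit counters -- 32 low bits in the L register and 8 high bits in the H
+ * register.  Assuming rd32() from ice_osdep.h and hypothetical hw/port
+ * variables, the unicast RX packet count could be assembled as:
+ *
+ *	u64 uprc = (u64)rd32(hw, GLPRT_UPRCL(port)) |
+ *		   ((u64)(rd32(hw, GLPRT_UPRCH(port)) &
+ *			  GLPRT_UPRCH_UPRCH_M) << 32);
+ */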
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
+#define EMP_SWT_REPIND				0x0020401c /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040a4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040ac /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041c0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
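+/* Usage sketch (illustrative): GLTSYN_TIME_L/GLTSYN_TIME_H expose the running
+ * value of timesync timer _i as a split 64-bit quantity.  Assuming rd32()
+ * from ice_osdep.h, and assuming the low word is read first (a common
+ * latching scheme for split time registers), the time could be sampled as:
+ *
+ *	u32 lo = rd32(hw, GLTSYN_TIME_L(tmr));
+ *	u32 hi = rd32(hw, GLTSYN_TIME_H(tmr));
+ *	u64 now = ((u64)hi << 32) | lo;
+ */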
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define GLPE_TSCD_FLR(_i)			(0x0051E24c + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
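+/* Usage sketch (illustrative): VSIQF_HKEY holds a per-VSI RSS hash key as
+ * 13 32-bit words and VSIQF_HLUT a per-VSI lookup table of 16 words, each
+ * carrying four 4-bit queue indices in its byte lanes.  Assuming wr32() from
+ * ice_osdep.h and a hypothetical key[] buffer, the key could be programmed
+ * as:
+ *
+ *	for (i = 0; i <= VSIQF_HKEY_MAX_INDEX; i++)
+ *		wr32(hw, VSIQF_HKEY(i, vsi_num), key[i]);
+ */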
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+
+#endif
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
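
A note on the register accessor convention used throughout the header above:
every field is described by a shift (_S) and mask (_M) pair, so reads and
updates are plain mask-and-shift operations on the 32-bit register value. A
minimal sketch, assuming the header's u32 typedef is in scope (function names
are illustrative, not part of the patch):

	/* extract the INTERVAL field from a VFINT_DYN_CTLN register value */
	static inline u32 itr_interval_get(u32 reg)
	{
		return (reg & VFINT_DYN_CTLN_INTERVAL_M) >>
		       VFINT_DYN_CTLN_INTERVAL_S;
	}

	/* overwrite the INTERVAL field, leaving all other bits untouched */
	static inline u32 itr_interval_set(u32 reg, u32 interval)
	{
		reg &= ~VFINT_DYN_CTLN_INTERVAL_M;
		return reg | ((interval << VFINT_DYN_CTLN_INTERVAL_S) &
			      VFINT_DYN_CTLN_INTERVAL_M);
	}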

* [dpdk-dev] [PATCH v5 02/31] net/ice/base: add basic structures
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
                     ` (28 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures required by the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_type.h | 869 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 869 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_type.h

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..256bf3f
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,869 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+#define BIT_ULL(a) (1ULL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) ((n) / (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
+
+static inline u32 ice_round_to_num(u32 N, u32 R)
+{
+	return ((((N) % (R)) < ((R) / 2)) ? (((N) / (R)) * (R)) :
+		((((N) + (R) - 1) / (R)) * (R)));
+}
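+
+/* Worked examples: despite its name, round_up_64bit() rounds to the nearest
+ * integer, e.g. round_up_64bit(10, 4) == 3 but round_up_64bit(9, 4) == 2.
+ * ice_round_to_num() rounds N to the nearest multiple of R, e.g.
+ * ice_round_to_num(970, 64) == 960 and ice_round_to_num(1000, 64) == 1024.
+ */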
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
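+
+/* Example: a driver that only wants init-time and control queue message
+ * tracing would set hw->debug_mask = ICE_DBG_INIT | ICE_DBG_AQ_MSG;
+ */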
+
+
+
+
+
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing the hardware information and the operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+#ifdef ADQ_SUPPORT
+	ICE_VSI_CHNL = 4,
+#endif /* ADQ_SUPPORT */
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bits definition */
+	u64 phy_type_low;
+	u64 phy_type_high;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of
+	 * ice_aqc_get_phy_caps structure
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	u64 phy_type_high;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* device is embedded rather than a plug-in card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the RESET_TYPE field of the GLGEN_RSTAT register.
+ * ICE_RESET_PFR does not match any RESET_TYPE field in the GLGEN_RSTAT
+ * register because its reset source is different from the other types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port to queue branches w.r.t topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM(s)/VSI nodes connect to the
+ * driver-defined default aggregator policy
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
+
+struct ice_sched_rl_profile {
+	u32 rate; /* In Kbps */
+	struct ice_aqc_rl_profile_elem info;
+};
+
+/* The aggregator type determines if identifier is for a VSI group,
+ * aggregator group, aggregator of queues, or queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+/* Rate limit types */
+enum ice_rl_type {
+	ICE_UNKNOWN_BW = 0,
+	ICE_MIN_BW,		/* for cir profile */
+	ICE_MAX_BW,		/* for eir profile */
+	ICE_SHARED_BW		/* for shared profile */
+};
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_NO_SHARED_RL_PROF_ID	0xFFFF
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+/* Access Macros for Tx Sched RL Profile data */
+#define ICE_TXSCHED_GET_RL_PROF_ID(p) LE16_TO_CPU((p)->info.profile_id)
+#define ICE_TXSCHED_GET_RL_MBS(p) LE16_TO_CPU((p)->info.max_burst_size)
+#define ICE_TXSCHED_GET_RL_MULTIPLIER(p) LE16_TO_CPU((p)->info.rl_multiply)
+#define ICE_TXSCHED_GET_RL_WAKEUP_MV(p) LE16_TO_CPU((p)->info.wake_up_calc)
+#define ICE_TXSCHED_GET_RL_ENCODE(p) LE16_TO_CPU((p)->info.rl_encode)
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range:1- 8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range:1 - 9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type leaf). A leaf node is created
+ *  under (a) as a child node; queues are added there by the add Tx/Rx queue
+ *  admin commands, which need the TEID of (a).
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: Above asterisk tree covers only basic terminology and scenario.
+ *  Refer to the documentation for more info.
+ */
+
+/* Data structure for saving BW information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate the corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct ice_dcb_ets_cfg {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prio_table[ICE_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[ICE_MAX_TRAFFIC_CLASS];
+	u8 tsatable[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct ice_dcb_pfc_cfg {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcena;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct ice_dcb_app_priority_table {
+	u16 prot_id;
+	u8 priority;
+	u8 selector;
+};
+
+#define ICE_MAX_USER_PRIORITY	8
+#define ICE_DCBX_MAX_APPS	32
+#define ICE_LLDPDU_SIZE		1500
+#define ICE_TLV_STATUS_OPER	0x1
+#define ICE_TLV_STATUS_SYNC	0x2
+#define ICE_TLV_STATUS_ERR	0x4
+#define ICE_APP_PROT_ID_FCOE	0x8906
+#define ICE_APP_PROT_ID_ISCSI	0x0cbc
+#define ICE_APP_PROT_ID_FIP	0x8914
+#define ICE_APP_SEL_ETHTYPE	0x1
+#define ICE_APP_SEL_TCPIP	0x2
+#define ICE_CEE_APP_SEL_ETHTYPE	0x0
+#define ICE_CEE_APP_SEL_TCPIP	0x1
+
+struct ice_dcbx_cfg {
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct ice_dcb_ets_cfg etscfg;
+	struct ice_dcb_ets_cfg etsrec;
+	struct ice_dcb_pfc_cfg pfc;
+	struct ice_dcb_app_priority_table app[ICE_DCBX_MAX_APPS];
+	u8 dcbx_mode;
+#define ICE_DCBX_MODE_CEE	0x1
+#define ICE_DCBX_MODE_IEEE	0x2
+	u8 app_mode;
+#define ICE_DCBX_APPS_NON_WILLING	0x1
+};
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	/* List contain profile id(s) and other params per layer */
+	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to configure */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* cumulative set of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	/* 2D Array for each Tx Sched RL Profile type */
+	struct ice_sched_rl_profile **cir_profiles;
+	struct ice_sched_rl_profile **eir_profiles;
+	struct ice_sched_rl_profile **srl_profiles;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	u16 max_burst_size;	/* driver sets this value */
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregators */
+	struct ice_bw_type_info tc_node_bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the itr/intrl granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding a filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
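+/* Decoding sketch for the version words above, e.g. given the 16-bit word
+ * read from ICE_SR_NVM_DEV_STARTER_VER:
+ *	major = (ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT;
+ *	minor = (ver & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT;
+ */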
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ND_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
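+/* Verification sketch for the rule above (read_sr_word() is a hypothetical
+ * helper returning Shadow RAM word i):
+ *
+ *	u16 i, sum = 0;
+ *	for (i = 0; i < hw->nvm.sr_words; i++)
+ *		sum += read_sr_word(hw, i);
+ *	valid = (sum == ICE_SR_SW_CHECKSUM_BASE);
+ */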
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/*
+ * Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL register.
+ * This is needed to determine the BAR0 space for the VFs
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 03/31] net/ice/base: add admin queue structures and commands
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 02/31] net/ice/base: add basic structures Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
                     ` (27 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures for
the admin queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 1891 +++++++++++++++++++++++++++++++++
 1 file changed, 1891 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..9332f84
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1891 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and the driver is
+	 * expected to release the resource before the timeout expires. The
+	 * value is provided in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
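+
+/* Usage sketch: to take the global configuration lock, fill res_id with
+ * ICE_AQC_RES_ID_GLBL_LOCK and access_type with ICE_AQC_RES_ACCESS_WRITE,
+ * send command 0x0008, and on success send 0x0009 to release the resource
+ * before the returned timeout (in milliseconds) expires.
+ */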
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
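+
+/* Packing sketch, following the field comments above: for the MAC address
+ * 00:11:22:33:44:55 one would expect sah to carry the first two octets
+ * (0x0011) and sal the remaining four (0x22334455), both big endian.
+ */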
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In the response, a non-zero value means that not all of the
+	 * configuration was returned; a new command shall be sent with this
+	 * value in the 'first desc' field.
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
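+
+/* Retrieval sketch: send 0x0200 with 'element' = 0; while the response's
+ * 'element' (next_elem) is non-zero, resend the command with that value to
+ * fetch the remaining entries, processing num_elems elements per response.
+ */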
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
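+
+/* Illustrative sketch, not part of the original patch: a section of the VSI
+ * context is only honored when its bit is set in valid_sections. This
+ * example selects "all" VLAN mode; CPU_TO_LE16 is assumed to be the
+ * byte-order helper from ice_osdep.h.
+ */
+static inline void
+ice_example_vsi_vlan_mode_all(struct ice_aqc_vsi_props *props)
+{
+	props->valid_sections |= CPU_TO_LE16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	props->vlan_flags &= (u8)~ICE_AQ_VSI_VLAN_MODE_M;
+	props->vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL << ICE_AQ_VSI_VLAN_MODE_S;
+}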
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, refers to the number of rules.
+	 * ops: update switch rules, refers to the number of filters.
+	 * ops: remove switch rules, refers to the entry index.
+	 * ops: get switch rules, refers to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. The last two bits
+	 * are the other-action identifier.
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
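+
+/* Illustrative helper, not part of the original patch: compose the 32-bit
+ * "act" word for a forward-to-VSI rule (action type 0, which is why no
+ * extra type bits are ORed in). CPU_TO_LE32 is assumed to be the byte-order
+ * helper from ice_osdep.h.
+ */
+static inline __le32 ice_example_act_fwd_to_vsi(u16 vsi_id)
+{
+	u32 act = ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT;
+
+	act |= ((u32)vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+		ICE_SINGLE_ACT_VSI_ID_M;
+	return CPU_TO_LE32(act);
+}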
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:1 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue of Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+#pragma pack(1)
+/* Add switch rule response:
+ * The content of the return buffer is the same as the input buffer. The
+ * status field and LUT index are updated as part of the response.
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} pdata;
+};
+
+#pragma pack()
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_txsched_move_grp_info_hdr {
+	__le32 src_parent_teid;
+	__le32 dest_parent_teid;
+	__le16 num_elems;
+	__le16 reserved;
+};
+
+
+struct ice_aqc_move_elem {
+	struct ice_aqc_txsched_move_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_conf_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+/* Rate limiting profile for
+ * Add RL profile (indirect 0x0410)
+ * Query RL profile (indirect 0x0411)
+ * Remove RL profile (indirect 0x0415)
+ * These indirect commands act on single or multiple
+ * RL profiles with the specified data.
+ */
+struct ice_aqc_rl_profile {
+	__le16 num_profiles;
+	__le16 num_processed; /* Only for response. Reserved in Command. */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_rl_profile_elem {
+	u8 level;
+	u8 flags;
+#define ICE_AQC_RL_PROFILE_TYPE_S	0x0
+#define ICE_AQC_RL_PROFILE_TYPE_M	(0x3 << ICE_AQC_RL_PROFILE_TYPE_S)
+#define ICE_AQC_RL_PROFILE_TYPE_CIR	0
+#define ICE_AQC_RL_PROFILE_TYPE_EIR	1
+#define ICE_AQC_RL_PROFILE_TYPE_SRL	2
+/* The following flag is used for Query RL Profile Data */
+#define ICE_AQC_RL_PROFILE_INVAL_S	0x7
+#define ICE_AQC_RL_PROFILE_INVAL_M	(0x1 << ICE_AQC_RL_PROFILE_INVAL_S)
+
+	__le16 profile_id;
+	__le16 max_burst_size;
+	__le16 rl_multiply;
+	__le16 wake_up_calc;
+	__le16 rl_encode;
+};
+
+
+struct ice_aqc_rl_profile_generic_elem {
+	struct ice_aqc_rl_profile_elem generic[1];
+};
+
+
+
+/* Configure L2 Node CGD (indirect 0x0414)
+ * This indirect command allows configuring a congestion domain for the given
+ * L2 node TEIDs in the scheduler topology.
+ */
+struct ice_aqc_cfg_l2_node_cgd {
+	__le16 num_l2_nodes;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_elem {
+	__le32 node_teid;
+	u8 cgd;
+	u8 reserved[3];
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_data {
+	struct ice_aqc_cfg_l2_node_cgd_elem elem[1];
+};
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+/* Query Node to Root Topology (indirect 0x0413)
+ * This command uses ice_aqc_get_elem as its data buffer.
+ */
+struct ice_aqc_query_node_to_root {
+	__le32 teid;
+	__le32 num_nodes; /* Response only */
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* PHY type defines (extended):
+ * The first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_50GBASE_CR2		BIT_ULL(36)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR2		BIT_ULL(37)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR2		BIT_ULL(38)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR2		BIT_ULL(39)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC	BIT_ULL(40)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2		BIT_ULL(41)
+#define ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC	BIT_ULL(42)
+#define ICE_PHY_TYPE_LOW_50G_AUI2		BIT_ULL(43)
+#define ICE_PHY_TYPE_LOW_50GBASE_CP		BIT_ULL(44)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR		BIT_ULL(45)
+#define ICE_PHY_TYPE_LOW_50GBASE_FR		BIT_ULL(46)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR		BIT_ULL(47)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4	BIT_ULL(48)
+#define ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC	BIT_ULL(49)
+#define ICE_PHY_TYPE_LOW_50G_AUI1		BIT_ULL(50)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR4		BIT_ULL(51)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR4		BIT_ULL(52)
+#define ICE_PHY_TYPE_LOW_100GBASE_LR4		BIT_ULL(53)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR4		BIT_ULL(54)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC	BIT_ULL(55)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4		BIT_ULL(56)
+#define ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC	BIT_ULL(57)
+#define ICE_PHY_TYPE_LOW_100G_AUI4		BIT_ULL(58)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4	BIT_ULL(59)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4	BIT_ULL(60)
+#define ICE_PHY_TYPE_LOW_100GBASE_CP2		BIT_ULL(61)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR2		BIT_ULL(62)
+#define ICE_PHY_TYPE_LOW_100GBASE_DR		BIT_ULL(63)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+/* The second set of defines is for phy_type_high. */
+#define ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4	BIT_ULL(0)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC	BIT_ULL(1)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2		BIT_ULL(2)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC	BIT_ULL(3)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2		BIT_ULL(4)
+#define ICE_PHY_TYPE_HIGH_MAX_INDEX		19
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR2			BIT(7)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR_PAM4		BIT(8)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR4			BIT(9)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR2_PAM4		BIT(10)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_50GB		BIT(9)
+#define ICE_AQ_LINK_SPEED_100GB		BIT(10)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+};
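+
+/* Illustrative helper, not part of the original patch: translate one
+ * ICE_AQ_LINK_SPEED_* bit (already converted to CPU order) into Mb/s;
+ * unknown or unlisted values report 0.
+ */
+static inline u32 ice_example_link_speed_to_mbps(u16 link_speed)
+{
+	switch (link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:	return 10;
+	case ICE_AQ_LINK_SPEED_100MB:	return 100;
+	case ICE_AQ_LINK_SPEED_1000MB:	return 1000;
+	case ICE_AQ_LINK_SPEED_2500MB:	return 2500;
+	case ICE_AQ_LINK_SPEED_5GB:	return 5000;
+	case ICE_AQ_LINK_SPEED_10GB:	return 10000;
+	case ICE_AQ_LINK_SPEED_20GB:	return 20000;
+	case ICE_AQ_LINK_SPEED_25GB:	return 25000;
+	case ICE_AQ_LINK_SPEED_40GB:	return 40000;
+	case ICE_AQ_LINK_SPEED_50GB:	return 50000;
+	case ICE_AQ_LINK_SPEED_100GB:	return 100000;
+	default:			return 0;
+	}
+}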
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* NVM Config Read (indirect 0x0704) and NVM Config Write (indirect 0x0705) */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
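+
+/* Illustrative helper, not part of the original patch: compose the flags
+ * word for a 512-entry PF LUT from the defines above. CPU_TO_LE16 is
+ * assumed to be the byte-order helper from ice_osdep.h.
+ */
+static inline __le16 ice_example_rss_lut_flags_pf_512(void)
+{
+	u16 flags = (ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
+		     ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) |
+		    (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+		     ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S);
+
+	return CPU_TO_LE16(flags);
+}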
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Note that the length of
+ * each struct ice_aqc_add_tx_qgrp varies with the number of queues
+ * in each group.
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
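+
+/* Illustrative helper, not part of the original patch: buffer size of one
+ * queue group, accounting for the single-element txqs[] declaration while
+ * the group actually carries num_txqs entries.
+ */
+static inline u16 ice_example_tx_qgrp_size(u8 num_txqs)
+{
+	return (u16)(sizeof(struct ice_aqc_add_tx_qgrp) +
+		     (num_txqs - 1) * sizeof(struct ice_aqc_add_txqs_perq));
+}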
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
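+
+/* Illustrative helper, not part of the original patch: size of one
+ * ice_aqc_dis_txq_item, rounded up to a 4-byte boundary so the next group's
+ * parent_teid stays aligned, implementing the padding rule described above.
+ */
+static inline u16 ice_example_dis_txq_item_size(u8 num_qs)
+{
+	u16 sz = (u16)(sizeof(struct ice_aqc_dis_txq_item) +
+		       (num_qs - 1) * sizeof(__le16));
+
+	return (u16)((sz + 3) & ~3);
+}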
+
+
+/* TX LAN Queues Cleanup Event (0x0C31) */
+struct ice_aqc_txqs_cleanup {
+	__le16 caller_opc;
+	__le16 cmd_tag;
+	u8 reserved[12];
+};
+
+
+/* Move / Reconfigure TX Queues (indirect 0x0C32) */
+struct ice_aqc_move_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_CMD_TYPE_S		0
+#define ICE_AQC_Q_CMD_TYPE_M		(0x3 << ICE_AQC_Q_CMD_TYPE_S)
+#define ICE_AQC_Q_CMD_TYPE_MOVE		1
+#define ICE_AQC_Q_CMD_TYPE_TC_CHANGE	2
+#define ICE_AQC_Q_CMD_TYPE_MOVE_AND_TC	3
+#define ICE_AQC_Q_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_qs;
+	u8 rsvd;
+	u8 timeout;
+#define ICE_AQC_Q_CMD_TIMEOUT_S		2
+#define ICE_AQC_Q_CMD_TIMEOUT_M		(0x3F << ICE_AQC_Q_CMD_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the move TX LAN Queues
+ * command (0x0C32).
+ */
+struct ice_aqc_move_txqs_elem {
+	__le16 txq_id;
+	u8 q_cgd;
+	u8 rsvd;
+	__le32 q_teid;
+};
+
+
+struct ice_aqc_move_txqs_data {
+	__le32 src_teid;
+	__le32 dest_teid;
+	struct ice_aqc_move_txqs_elem txqs[1];
+};
+
+
+
+
+
+
+/* LAN Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
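+
+/* Illustrative helper, not part of the original patch: as noted above, the
+ * buffer holds datalen / 2 entries; this decodes the module ID of one entry.
+ * LE16_TO_CPU is assumed to be the byte-order helper from ice_osdep.h.
+ */
+static inline u16
+ice_example_fw_log_module(const struct ice_aqc_fw_logging_data *data, u16 idx)
+{
+	u16 entry = LE16_TO_CPU(data->entry[idx]);
+
+	return (entry & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+}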
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_h: opaque data high-half
+ * @cookie_l: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_query_node_to_root query_node_to_root;
+		struct ice_aqc_cfg_l2_node_cgd cfg_l2_node_cgd;
+		struct ice_aqc_rl_profile rl_profile;
+
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_txqs_cleanup txqs_cleanup;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
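+
+/* Illustrative sketch, not part of the original patch: typical flag handling
+ * for an indirect command. "write" means the buffer carries data to firmware
+ * rather than receiving a response; CPU_TO_LE16 is assumed to be the
+ * byte-order helper from ice_osdep.h.
+ */
+static inline void
+ice_example_set_indirect_flags(struct ice_aq_desc *desc, u16 buf_size,
+			       bool write)
+{
+	desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	if (write)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+}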
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* Object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_add_rl_profiles			= 0x0410,
+	ice_aqc_opc_query_rl_profiles			= 0x0411,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+	ice_aqc_opc_remove_rl_profiles			= 0x0415,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+
+
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 04/31] net/ice/base: add sideband queue info
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (2 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
                     ` (26 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures
for the sideband queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sbq_cmd.h | 93 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h

diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect command
+ * and non posted
+ */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
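+
+/* Illustrative sketch, not part of the original patch: populate a sideband
+ * request from the internal message struct. src_dev and func_id are zeroed
+ * here for simplicity; CPU_TO_LE16/CPU_TO_LE32 are assumed to be the
+ * byte-order helpers from ice_osdep.h.
+ */
+static inline void
+ice_example_fill_sbq_req(struct ice_sbq_msg_req *req,
+			 const struct ice_sbq_msg_input *in)
+{
+	req->dest_dev = in->dest_dev;
+	req->src_dev = 0;
+	req->opcode = in->opcode;
+	req->flags = ICE_SBQ_MSG_FLAGS;
+	req->sbe_fbe = ICE_SBQ_MSG_SBE_FBE;
+	req->func_id = 0;
+	req->msg_addr_low = CPU_TO_LE16(in->msg_addr_low);
+	req->msg_addr_high = CPU_TO_LE32(in->msg_addr_high);
+	req->data = CPU_TO_LE32(in->data);
+}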
+#endif /* _ICE_SBQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (3 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 06/31] net/ice/base: add control queue information Wenzhuo Lu
                     ` (25 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add all the device IDs that represent the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_devids.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_devids.h

diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
+
+#endif /* _ICE_DEVIDS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 06/31] net/ice/base: add control queue information
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (4 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
                     ` (24 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures for the control queues.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_controlq.c | 1098 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_controlq.h |   97 ++++
 2 files changed, 1195 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..fb82c23
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the queue is enabled, else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with the Admin queue design; there
+		 * is no register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it's alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns the number of free descriptors
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest using the head register for better
+	 * timing reliability than the DD bit
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW/MBX; the function returns the
+	 * number of desc available. The clean function called here could be
+	 * called in a separate thread in case of asynchronous completions.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
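+	/* poll for the write back in 1 ms steps until it is seen or
+	 * sq_cmd_timeout (ICE_CTL_Q_SQ_CMD_TIMEOUT, 250 ms) expires
+	 */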
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
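+
+/* Usage sketch (illustrative, not part of this file's API): a direct
+ * command with no data buffer is built with
+ * ice_fill_dflt_direct_cmd_desc() and handed to ice_sq_send_cmd(),
+ * e.g. on the admin queue:
+ *
+ *	struct ice_aq_desc desc;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+ *	status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ */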
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
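+
+/* Usage sketch (illustrative): a service task can drain pending events
+ * with a caller-allocated buffer until ICE_ERR_AQ_NO_WORK is returned:
+ *
+ *	struct ice_rq_event_info event;
+ *	u16 pending = 0;
+ *
+ *	event.buf_len = cq->rq_buf_size;
+ *	event.msg_buf = msg_buf;	(a caller-allocated buffer)
+ *
+ *	do {
+ *		if (ice_clean_rq_elem(hw, cq, &event, &pending))
+ *			break;
+ *		(dispatch on LE16_TO_CPU(event.desc.opcode) here)
+ *	} while (pending);
+ */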
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
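+/* Number of free descriptors in ring R; one slot is always left empty
+ * so that a full ring can be told apart from an empty one.
+ */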
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address to dma head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 07/31] net/ice/base: add basic transmit scheduler
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (5 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 06/31] net/ice/base: add control queue information Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
                     ` (23 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code for the basic TX scheduler.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
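The SW DB lookups below, e.g. ice_sched_find_node_by_teid(), are only
safe while port_info->sched_lock is held. A minimal usage sketch
(use_node() is a placeholder for the caller's logic):

	struct ice_sched_node *node;

	ice_acquire_lock(&pi->sched_lock);
	node = ice_sched_find_node_by_teid(pi->root, teid);
	if (node)
		use_node(node);
	ice_release_lock(&pi->sched_lock);
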
 drivers/net/ice/base/ice_sched.c | 5380 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_sched.h |  210 ++
 2 files changed, 5590 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..7acbae6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,5380 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through, stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is the same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node to the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from hw
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
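+	/* buf ends in a one-element array of TEIDs, so the allocation only
+	 * needs (num_nodes - 1) extra entries beyond sizeof(*buf)
+	 */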
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
+			  hw->adminq.sq_last_status);
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional, and won't go
+			 * more than 9 calls deep
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+
+		ice_sched_remove_elems(hw, node->parent, 1, &teid);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue-to-port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling element
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+/**
+ * ice_aq_cfg_sched_elems - configures scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_cfgd: returns total number of elements configured
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure scheduling elements (0x0403)
+ */
+static enum ice_status
+ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
+		       struct ice_aqc_conf_elem *buf, u16 buf_size,
+		       u16 *elems_cfgd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_cfg_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_cfgd, cd);
+}
+
+/**
+ * ice_aq_move_sched_elems - move scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to move
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_movd: returns total number of groups moved
+ * @cd: pointer to command details structure or NULL
+ *
+ * Move scheduling elements (0x0408)
+ */
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_movd, cd);
+}
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_aq_rl_profile - performs a rate limiting task
+ * @hw: pointer to the hw struct
+ * @opcode: opcode for add, query, or remove profile(s)
+ * @num_profiles: the number of profiles
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_processed: number of processed add or remove profile(s) to return
+ * @cd: pointer to command details structure
+ *
+ * RL profile function to add, query, or remove profile(s)
+ */
+static enum ice_status
+ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode,
+		  u16 num_profiles, struct ice_aqc_rl_profile_generic_elem *buf,
+		  u16 buf_size, u16 *num_processed, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_rl_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.rl_profile;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	cmd->num_profiles = CPU_TO_LE16(num_profiles);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_processed)
+		*num_processed = LE16_TO_CPU(cmd->num_processed);
+	return status;
+}
+
+/**
+ * ice_aq_add_rl_profile - adds rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_added: total number of profiles added to return
+ * @cd: pointer to command details structure
+ *
+ * Add rl profile (0x0410)
+ */
+static enum ice_status
+ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
+		      struct ice_aqc_rl_profile_generic_elem *buf,
+		      u16 buf_size, u16 *num_profiles_added,
+		      struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_add_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_added, cd);
+}
+
+/**
+ * ice_aq_query_rl_profile - query rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure
+ *
+ * Query rl profile (0x0411)
+ */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
+				 num_profiles, buf, buf_size, NULL, cd);
+}
+
+/**
+ * ice_aq_remove_rl_profile - removes rl profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to remove
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_removed: total number of profiles removed to return
+ * @cd: pointer to command details structure or NULL
+ *
+ * Remove rl profile (0x0415)
+ */
+static enum ice_status
+ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			 struct ice_aqc_rl_profile_generic_elem *buf,
+			 u16 buf_size, u16 *num_profiles_removed,
+			 struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_remove_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_removed, cd);
+}
+
+/**
+ * ice_sched_clear_rl_prof - clears rl prof entries
+ * @pi: port information structure
+ *
+ * This function removes all RL profiles from HW as well as from the SW DB.
+ */
+static void ice_sched_clear_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			struct ice_hw *hw = pi->hw;
+			enum ice_status status;
+
+			rl_prof_elem->prof_id_ref = 0;
+			status = ice_sched_del_rl_profile(hw, rl_prof_elem);
+			if (status) {
+				ice_debug(hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+				/* On error, free mem required */
+				LIST_DEL(&rl_prof_elem->list_entry);
+				ice_free(hw, rl_prof_elem);
+			}
+		}
+	}
+}
+
+/**
+ * ice_sched_clear_agg - clears the agg related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes the agg list and frees up agg-related memory
+ * previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	/* remove rl profiles related lists */
+	ice_sched_clear_rl_prof(pi);
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+/**
+ * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
+ * @hw: pointer to the hw struct
+ * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure L2 Node CGD (0x0414)
+ */
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf,
+		       u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_cfg_l2_node_cgd *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.cfg_l2_node_cgd;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to hw as well as to the SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add node failed FW Error %d\n",
+			  hw->adminq.sq_last_status);
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
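+
+/* Note on the buffer sizing above: struct ice_aqc_add_elem already holds
+ * one generic element, so only (num_nodes - 1) extra elements are
+ * appended; e.g. num_nodes = 8 gives
+ * buf_size = sizeof(*buf) + 7 * sizeof(*buf->generic).
+ */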
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* current number of children + required nodes exceed max children ? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* utilize all the spaces if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional, and won't
+			 * go more than two calls deep
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't modify the first node's teid memory if it was
+		 * already set in the call above. Instead, pass temporary
+		 * storage to all further recursive calls.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional: for 1024 queues
+		 * per VSI, it takes at most 16 iterations.
+		 * 1024 / 8 = 128 layer-8 (queue group) nodes
+		 * 128 / 8 = 16 (8 nodes are added per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
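+
+/* Worked example of the overflow handling above (illustrative numbers):
+ * with max_child_nodes = 8, a parent that already has 6 children and a
+ * request for 5 new nodes is split into two recursive calls. The first
+ * adds 2 nodes under the current parent (filling it), the second adds
+ * the remaining 3 under parent->sibling. Only the first call writes
+ * through first_node_teid; the second uses the temp variable, so the
+ * caller still sees the teid of the first of the 5 nodes.
+ */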
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* It's always total layers - 1, the array is 0 relative so -2 */
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
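+
+/* Example: with 9 total layers the nodes are indexed 0-8; the leaf
+ * (queue) nodes sit at layer 8, so the queue group layer is 7.
+ */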
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the vsi layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_sched_get_agg_layer - get the current aggregator layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current aggregator layer number
+ */
+static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
+{
+	/* Num Layers       agg layer
+	 *     9               4
+	 *     7 or less       sw_entry_point_layer
+	 */
+	/* calculate the agg layer based on number of layers. */
+	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port information structure
+ *
+ * This function is the initial call: it finds the total number of Tx
+ * scheduler resources and the default topology created by firmware, and
+ * stores the information in the SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1 and 8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1 and 9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+	for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+		INIT_LIST_HEAD(&pi->rl_prof_list[i]);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_node - Get the struct ice_sched_node for given teid
+ * @pi: port information structure
+ * @teid: Scheduler node TEID
+ *
+ * This function retrieves the ice_sched_node struct for the given teid
+ * from the SW DB and returns it to the caller.
+ */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid)
+{
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return NULL;
+
+	/* Find the node starting from root */
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_find_node_by_teid(pi->root, teid);
+	ice_release_lock(&pi->sched_lock);
+
+	if (!node)
+		ice_debug(pi->hw, ICE_DBG_SCHED,
+			  "Node not found for teid=0x%x\n", teid);
+
+	return node;
+}
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional, and won't
+		 * go more than eight calls deep
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* bail out on an invalid VSI handle */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_get_agg_node - Get an aggregator node based on agg id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @agg_id: aggregator id
+ *
+ * This function retrieves an aggregator node for a given agg id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id)
+{
+	struct ice_sched_node *node;
+	u8 agg_layer;
+
+	agg_layer = ice_sched_get_agg_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->agg_id == agg_id)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_check_node - Compare node parameters between SW DB and HW DB
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function queries and compares the HW element with SW DB node parameters
+ */
+static bool ice_sched_check_node(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	struct ice_aqc_get_elem buf;
+	enum ice_status status;
+	u32 node_teid;
+
+	node_teid = LE32_TO_CPU(node->info.node_teid);
+	status = ice_sched_query_elem(hw, node_teid, &buf);
+	if (status != ICE_SUCCESS)
+		return false;
+
+	if (memcmp(buf.generic, &node->info, sizeof(*buf.generic))) {
+		ice_debug(hw, ICE_DBG_SCHED, "Node mismatch for teid=0x%x\n",
+			  node_teid);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
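+
+/* Worked example (illustrative numbers): for num_qs = 130 with
+ * max_children = 8 at the queue group layer,
+ * num_nodes[qgl] = DIVIDE_AND_ROUND_UP(130, 8) = 17 queue group nodes.
+ * If another layer lies between qgl and vsil, it then needs
+ * DIVIDE_AND_ROUND_UP(17, 8) = 3 nodes, and so on up to (but excluding)
+ * the VSI layer. Layers at or above the VSI layer are left at zero.
+ */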
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to the tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of support nodes needed to add this
+ * VSI into the Tx tree, including the VSI, its parent and the intermediate
+ * nodes in lower layers
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add an intermediate node if the TC has no children;
+		 * at least one node is always needed at the VSI layer
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached their max
+			 * children, add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* tree has one intermediate node to add this new VSI.
+			 * So no need to calculate supported nodes for below
+			 * layers.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI support nodes into the Tx tree, including
+ * the VSI, its parent and the intermediate nodes in lower layers
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add VSI support nodes to the TC subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* number of queues is unchanged or less than the previous number */
+	if (new_numqs <= prev_numqs)
+		return status;
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+	/* Always keep the configuration for the maximum number of queues.
+	 * Update the tree only if the number of queues > previous number of
+	 * queues. This may leave some extra nodes in the tree if the number
+	 * of queues < previous number, but that wouldn't harm anything.
+	 * Removing those extra nodes may complicate the code if those nodes
+	 * are part of SRL or individually rate limited.
+	 */
+	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+					       new_num_nodes, owner);
+	if (status)
+		return status;
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and VSI is in suspended state then resume the VSI back. If TC is
+ * disabled then suspend the VSI if it is not already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "add/config VSI %d\n", vsi_handle);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if tc is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queues whenever the VSI gets added for
+		 * the first time into the scheduler tree (boot or after
+		 * reset). The child nodes always need to be recreated in
+		 * these cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
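+
+/* Typical call (illustrative; the handle and queue count are arbitrary):
+ * enable TC 0 for a VSI with up to 64 LAN queues.
+ *
+ *	status = ice_sched_cfg_vsi(pi, vsi_handle, 0, 64,
+ *				   ICE_SCHED_NODE_OWNER_LAN, true);
+ *
+ * Calling it again with enable = false suspends the VSI node instead of
+ * removing it; ice_rm_vsi_lan_cfg below does the actual removal.
+ */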
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove agg related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
+ * @node: pointer to the sub-tree node
+ *
+ * This function checks for a leaf node presence in a given sub-tree node.
+ */
+static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < node->num_children; i++)
+		if (ice_sched_is_leaf_node_present(node->children[i]))
+			return true;
+	/* check for a leaf node */
+	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+}
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma child nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+		u8 j = 0;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (ice_sched_is_leaf_node_present(vsi_node)) {
+			ice_debug(pi->hw, ICE_DBG_SCHED,
+				  "VSI has leaf nodes in TC %d\n", i);
+			status = ICE_ERR_IN_USE;
+			goto exit_sched_rm_vsi_cfg;
+		}
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan child nodes from the scheduler
+ * tree for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+/**
+ * ice_sched_is_tree_balanced - Check whether the tree nodes match the HW DB
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function compares all the nodes of a given tree against the HW DB
+ * nodes. It needs to be called with the port_info->sched_lock held.
+ */
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	u8 i;
+
+	/* start from the leaf node */
+	for (i = 0; i < node->num_children; i++)
+		/* Fail if the node doesn't match the SW DB.
+		 * This recursion is intentional, and won't
+		 * go more than nine calls deep.
+		 */
+		if (!ice_sched_is_tree_balanced(hw, node->children[i]))
+			return false;
+
+	return ice_sched_check_node(hw, node);
+}
+
+/**
+ * ice_aq_query_node_to_root - retrieve the tree topology for a given node teid
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function retrieves the tree topology from the firmware for the
+ * given node teid up to the root node.
+ */
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_query_node_to_root *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.query_node_to_root;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
+	cmd->teid = CPU_TO_LE32(node_teid);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_agg_info - get the aggregator info entry
+ * @hw: pointer to the hardware structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the agg id. It returns the info entry if the
+ * agg id is present in the list, otherwise it returns NULL.
+ */
+static struct ice_sched_agg_info*
+ice_get_agg_info(struct ice_hw *hw, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id)
+			return agg_info;
+
+	return NULL;
+}
+
+/**
+ * ice_move_all_vsi_to_dflt_agg - move all VSI(s) to default agg
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: traffic class number
+ * @rm_vsi_info: whether to remove the VSI info entries
+ *
+ * This function moves all the VSIs to the default aggregator and deletes
+ * the agg VSI info entries based on the passed-in boolean parameter
+ * rm_vsi_info. The caller must hold the scheduler lock.
+ */
+static enum ice_status
+ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi,
+			     struct ice_sched_agg_info *agg_info, u8 tc,
+			     bool rm_vsi_info)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_vsi_info *tmp;
+	enum ice_status status = ICE_SUCCESS;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, tmp, &agg_info->agg_vsi_list,
+				 ice_sched_agg_vsi_info, list_entry) {
+		u16 vsi_handle = agg_vsi_info->vsi_handle;
+
+		/* Move VSI to default agg */
+		if (!ice_is_tc_ena(agg_vsi_info->tc_bitmap[0], tc))
+			continue;
+
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle,
+						   ICE_DFLT_AGG_ID, tc);
+		if (status)
+			break;
+
+		ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+		if (rm_vsi_info && !agg_vsi_info->tc_bitmap[0]) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(pi->hw, agg_vsi_info);
+		}
+	}
+
+	return status;
+}
+
+/**
+ * ice_rm_agg_cfg_tc - remove agg configuration for tc
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: tc number
+ * @rm_vsi_info: whether to remove the VSI info entries
+ *
+ * This function removes the aggregator's references to the VSIs of the
+ * given tc and removes the aggregator configuration completely for the
+ * requested tc. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
+		  u8 tc, bool rm_vsi_info)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	/* If nothing to remove - return success */
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		goto exit_rm_agg_cfg_tc;
+
+	status = ice_move_all_vsi_to_dflt_agg(pi, agg_info, tc, rm_vsi_info);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	/* Delete aggregator node(s) */
+	status = ice_sched_rm_agg_cfg(pi, agg_info->agg_id, tc);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	ice_clear_bit(tc, agg_info->tc_bitmap);
+exit_rm_agg_cfg_tc:
+	return status;
+}
+
+/**
+ * ice_save_agg_tc_bitmap - save agg TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * Save agg TC bitmap. This function needs to be called with scheduler
+ * lock held.
+ */
+static enum ice_status
+ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
+		       ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_cfg_agg - configure agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * It registers a unique aggregator node into scheduler services. It
+ * allows a user to register with a unique ID to track its resources.
+ * The aggregator type determines if this is a queue group, VSI group
+ * or aggregator group. It then creates the agg node(s) for the requested
+ * tc(s) or removes an existing agg node including its configuration
+ * if indicated via tc_bitmap. Call ice_rm_agg_cfg to release agg
+ * resources and remove the agg id.
+ * This function needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+		  enum ice_agg_type agg_type, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info) {
+		/* Create a new entry for the new agg id */
+		agg_info = (struct ice_sched_agg_info *)
+			ice_malloc(hw, sizeof(*agg_info));
+		if (!agg_info) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit_reg_agg;
+		}
+		agg_info->agg_id = agg_id;
+		agg_info->agg_type = agg_type;
+		agg_info->tc_bitmap[0] = 0;
+
+		/* Initialize the aggregator vsi list head */
+		INIT_LIST_HEAD(&agg_info->agg_vsi_list);
+
+		/* Add new entry in agg list */
+		LIST_ADD(&agg_info->list_entry, &hw->agg_list);
+	}
+	/* Create agg node(s) for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc)) {
+			/* Delete agg cfg tc if it exists previously */
+			status = ice_rm_agg_cfg_tc(pi, agg_info, tc, false);
+			if (status)
+				break;
+			continue;
+		}
+
+		/* Check if agg node for tc already exists */
+		if (ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+			continue;
+
+		/* Create new agg node for tc */
+		status = ice_sched_add_agg_cfg(pi, agg_id, tc);
+		if (status)
+			break;
+
+		/* Save agg node's tc information */
+		ice_set_bit(tc, agg_info->tc_bitmap);
+	}
+exit_reg_agg:
+	return status;
+}
+
+/**
+ * ice_cfg_agg - config agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function configures aggregator node(s).
+ */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
+	    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_cfg_agg(pi, agg_id, agg_type,
+				   (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_tc_bitmap(pi, agg_id,
+						(ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
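+
+/* Example (illustrative; the agg id is arbitrary): register aggregator
+ * 10 as an agg group on TC 0 and TC 1 (tc_bitmap = 0x3).
+ *
+ *	status = ice_cfg_agg(pi, 10, ICE_AGG_TYPE_AGG, 0x3);
+ *
+ * A later call with a bit cleared in tc_bitmap removes the agg
+ * configuration for that TC (see ice_sched_cfg_agg above).
+ */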
+
+/**
+ * ice_get_agg_vsi_info - get the aggregator's VSI info entry
+ * @agg_info: aggregator info
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns agg VSI info based on VSI handle. This function needs
+ * to be called with scheduler lock held.
+ */
+static struct ice_sched_agg_vsi_info*
+ice_get_agg_vsi_info(struct ice_sched_agg_info *agg_info, u16 vsi_handle)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+	LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+			    ice_sched_agg_vsi_info, list_entry)
+		if (agg_vsi_info->vsi_handle == vsi_handle)
+			return agg_vsi_info;
+
+	return NULL;
+}
+
+/**
+ * ice_get_vsi_agg_info - get the agg info of VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns the agg info of the VSI represented via vsi_handle,
+ * i.e. a VSI that has an aggregator other than the default one. This
+ * function needs to be called with the scheduler lock held.
+ */
+static struct ice_sched_agg_info*
+ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+		if (agg_vsi_info)
+			return agg_info;
+	}
+	return NULL;
+}
+
+/**
+ * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * Save the VSI to aggregator TC bitmap. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+			   ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_assoc_vsi_to_agg - associate or move VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * This function moves the VSI to a new or the default aggregator node. If
+ * the VSI is already associated with the agg node then no operation is
+ * performed on the tree. This function needs to be called with the
+ * scheduler lock held.
+ */
+static enum ice_status
+ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
+			   u16 vsi_handle, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info) {
+		/* Create new entry for vsi under agg list */
+		agg_vsi_info = (struct ice_sched_agg_vsi_info *)
+			ice_malloc(hw, sizeof(*agg_vsi_info));
+		if (!agg_vsi_info)
+			return ICE_ERR_NO_MEMORY;
+
+		/* add vsi id into the agg list */
+		agg_vsi_info->vsi_handle = vsi_handle;
+		LIST_ADD(&agg_vsi_info->list_entry, &agg_info->agg_vsi_list);
+	}
+	/* Move vsi node to new agg node for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+
+		/* Move VSI to new agg */
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
+		if (status)
+			break;
+
+		if (agg_id != ICE_DFLT_AGG_ID)
+			ice_set_bit(tc, agg_vsi_info->tc_bitmap);
+		else
+			ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+	}
+	/* If the VSI moved back to the default agg, delete its
+	 * agg_vsi_info entry.
+	 */
+	if (!ice_is_any_bit_set(agg_vsi_info->tc_bitmap,
+				ICE_MAX_TRAFFIC_CLASS)) {
+		LIST_DEL(&agg_vsi_info->list_entry);
+		ice_free(hw, agg_vsi_info);
+	}
+	return status;
+}
+
+/**
+ * ice_move_vsi_to_agg - moves VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: tc bitmap of enabled tc(s)
+ *
+ * Move or associate VSI to a new or default aggregator node.
+ */
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
+					    (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
+						    (ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
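+
+/* Example (illustrative agg id): move a VSI to aggregator 10 for TC 0
+ * and TC 1, then back to the default aggregator:
+ *
+ *	status = ice_move_vsi_to_agg(pi, 10, vsi_handle, 0x3);
+ *	status = ice_move_vsi_to_agg(pi, ICE_DFLT_AGG_ID, vsi_handle, 0x3);
+ *
+ * Moving back to ICE_DFLT_AGG_ID clears the per-TC bits and drops the
+ * agg_vsi_info entry once no TC references the aggregator anymore.
+ */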
+
+/**
+ * ice_rm_agg_cfg - remove agg configuration
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the aggregator's VSI references and deletes the
+ * agg id info. It removes the agg configuration completely.
+ */
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
+		if (status)
+			goto exit_ice_rm_agg_cfg;
+	}
+
+	if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
+		status = ICE_ERR_IN_USE;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	/* Safe to delete entry now */
+	LIST_DEL(&agg_info->list_entry);
+	ice_free(pi->hw, agg_info);
+
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+
+exit_ice_rm_agg_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_set_clear_cir_bw_alloc - set or clear CIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear CIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->cir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->cir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_set_clear_eir_bw_alloc - set or clear EIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear EIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->eir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->eir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_bw_alloc - save VSI node's bw alloc information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_cir_bw - set or clear CIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear CIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = 0;
+	} else {
+		/* Save type of bw information */
+		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_eir_bw - set or clear EIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear EIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved shared bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+		/* save EIR bw information */
+		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_shared_bw - set or clear shared bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear shared bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved EIR bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+		/* save shared bw information */
+		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = bw;
+	}
+}
+
+/**
+ * ice_sched_save_vsi_bw - save VSI node's bw information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_prio - set or clear priority information
+ * @bw_t_info: bandwidth type information structure
+ * @prio: priority to save
+ *
+ * Save or clear priority (prio) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
+{
+	bw_t_info->generic = prio;
+	if (bw_t_info->generic)
+		ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_prio - save VSI node's priority information
+ * @pi: port information structure
+ * @vsi_handle: Software VSI handle
+ * @tc: traffic class
+ * @prio: priority to save
+ *
+ * Save priority information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			u8 prio)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw_alloc - save agg node's bw alloc information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: bandwidth alloc information
+ *
+ * Save bw alloc information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw - save agg node's bw information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_vsi_bw_lmt_per_tc - configure VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_dflt_lmt_per_tc - configure default VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function configures default bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
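+
+/*
+ * Usage sketch (illustrative, not part of the driver): cap TC 0 of a VSI
+ * at 500 Mbps via the max (EIR) profile, then restore the default profile.
+ * pi and vsi_handle are assumed to come from the caller's context.
+ *
+ *	enum ice_status ret;
+ *
+ *	ret = ice_cfg_vsi_bw_lmt_per_tc(pi, vsi_handle, 0, ICE_MAX_BW,
+ *					500000);
+ *	if (!ret)
+ *		ret = ice_cfg_vsi_bw_dflt_lmt_per_tc(pi, vsi_handle, 0,
+ *						     ICE_MAX_BW);
+ */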
+
+/**
+ * ice_cfg_agg_bw_lmt_per_tc - configure aggregator bw limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function applies bw limit to aggregator scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator bw default limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function applies default bw limit to aggregator scheduling node based
+ * on tc information.
+ */
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_shared_lmt - configure VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, bw);
+}
+
+/**
+ * ice_cfg_vsi_bw_no_shared_lmt - configure VSI bw for no shared limiter
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes the shared rate limiter (SRL) of all VSI type nodes
+ * across all traffic classes for VSI matching handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
+					       ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_agg_bw_shared_lmt - configure aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type nodes
+ * across all traffic classes for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, bw);
+}
+
+/**
+ * ice_cfg_agg_bw_no_shared_lmt - configure aggregator bw for no shared limiter
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the shared rate limiter (SRL) of all agg type nodes
+ * across all traffic classes for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW);
+}
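+
+/*
+ * Usage sketch (illustrative, not part of the driver): share 2 Gbps across
+ * all TC nodes of an aggregator, then remove the shared limiter again. The
+ * no_shared variant simply passes ICE_SCHED_DFLT_BW.
+ *
+ *	enum ice_status ret;
+ *
+ *	ret = ice_cfg_agg_bw_shared_lmt(pi, agg_id, 2000000);
+ *	if (!ret)
+ *		ret = ice_cfg_agg_bw_no_shared_lmt(pi, agg_id);
+ */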
+
+/**
+ * ice_cfg_vsi_q_priority - config VSI queue priority of node
+ * @pi: port information structure
+ * @num_qs: number of VSI queues
+ * @q_ids: queue ids array
+ * @q_prio: queue priority array
+ *
+ * This function configures the queue node priority (Sibling Priority) of the
+ * passed in VSI's queue(s) for a given traffic class (tc).
+ */
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_qs; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
+		if (!node || node->info.data.elem_type !=
+		    ICE_AQC_ELEM_TYPE_LEAF) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		/* Configure Priority */
+		status = ice_sched_cfg_sibl_node_prio(hw, node, q_prio[i]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_agg_vsi_priority_per_tc - config agg's VSI priority per tc
+ * @pi: port information structure
+ * @agg_id: Aggregator id
+ * @num_vsis: number of VSI(s)
+ * @vsi_handle_arr: array of software VSI handles
+ * @node_prio: pointer to node priority
+ * @tc: traffic class
+ *
+ * This function configures the node priority (Sibling Priority) of the
+ * passed in VSI's for a given traffic class (tc) of an Aggregator id.
+ */
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		goto exit_agg_priority_per_tc;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_agg_priority_per_tc;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		goto exit_agg_priority_per_tc;
+
+	if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
+		goto exit_agg_priority_per_tc;
+
+	for (i = 0; i < num_vsis; i++) {
+		struct ice_sched_node *vsi_node;
+		bool vsi_handle_valid = false;
+		u16 vsi_handle;
+
+		status = ICE_ERR_PARAM;
+		vsi_handle = vsi_handle_arr[i];
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			goto exit_agg_priority_per_tc;
+		/* Verify child nodes before applying settings */
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				vsi_handle_valid = true;
+				break;
+			}
+		if (!vsi_handle_valid)
+			goto exit_agg_priority_per_tc;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			goto exit_agg_priority_per_tc;
+
+		if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
+			/* Configure Priority */
+			status = ice_sched_cfg_sibl_node_prio(hw, vsi_node,
+							      node_prio[i]);
+			if (status)
+				break;
+			status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
+							 node_prio[i]);
+			if (status)
+				break;
+		}
+	}
+
+exit_agg_priority_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_alloc - config VSI bw alloc per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @ena_tcmap: enabled tc map
+ * @rl_type: Rate limit type CIR/EIR
+ * @bw_alloc: Array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in VSI's
+ * node(s) for enabled traffic class.
+ */
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
+						     rl_type, bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_alloc - config agg bw alloc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @ena_tcmap: enabled tc map
+ * @rl_type: rate limit type CIR/EIR
+ * @bw_alloc: array of bw alloc
+ *
+ * This function configures the bw allocation of passed in aggregator for
+ * enabled traffic class(s).
+ */
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_cfg_agg_bw_alloc;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+exit_cfg_agg_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_calc_wakeup - calculate rl profile wakeup parameter
+ * @bw: bandwidth in kbps
+ *
+ * This function calculates the wakeup parameter of rl profile.
+ */
+static u16 ice_sched_calc_wakeup(s32 bw)
+{
+	s64 bytes_per_sec, wakeup_int, wakeup_a, wakeup_b, wakeup_f;
+	s32 wakeup_f_int;
+	u16 wakeup = 0;
+
+	/* Get the wakeup integer value */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+	wakeup_int = DIV_64BIT(ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+	if (wakeup_int > 63) {
+		wakeup = (u16)((1 << 15) | wakeup_int);
+	} else {
+		/* Calculate fraction value up to 4 decimals
+		 * Convert Integer value to a constant multiplier
+		 */
+		wakeup_b = (s64)ICE_RL_PROF_MULTIPLIER * wakeup_int;
+		wakeup_a = DIV_64BIT((s64)ICE_RL_PROF_MULTIPLIER *
+				     ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+
+		/* Get Fraction value */
+		wakeup_f = wakeup_a - wakeup_b;
+
+		/* Round up the Fractional value via Ceil(Fractional value) */
+		if (wakeup_f > DIV_64BIT(ICE_RL_PROF_MULTIPLIER, 2))
+			wakeup_f += 1;
+
+		wakeup_f_int = (s32)DIV_64BIT(wakeup_f * ICE_RL_PROF_FRACTION,
+					      ICE_RL_PROF_MULTIPLIER);
+		wakeup |= (u16)(wakeup_int << 9);
+		wakeup |= (u16)(0x1ff & wakeup_f_int);
+	}
+
+	return wakeup;
+}
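+
+/*
+ * Decoding sketch for the wakeup word built above (illustrative, not part
+ * of the driver). Bit 15 selects the integer-only encoding; otherwise bits
+ * 14:9 hold the integer part and bits 8:0 hold the fraction, which is in
+ * 1/512 units assuming ICE_RL_PROF_FRACTION matches the 9-bit field:
+ *
+ *	if (wakeup & (1 << 15)) {
+ *		int_part = wakeup & 0x7fff;	integer-only encoding
+ *		frac_part = 0;
+ *	} else {
+ *		int_part = (wakeup >> 9) & 0x3f;
+ *		frac_part = wakeup & 0x1ff;
+ *	}
+ */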
+
+/**
+ * ice_sched_bw_to_rl_profile - convert bw to profile parameters
+ * @bw: bandwidth in kbps
+ * @profile: profile parameters to return
+ *
+ * This function converts the bw to profile structure format.
+ */
+static enum ice_status
+ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	s64 bytes_per_sec, ts_rate, mv_tmp;
+	bool found = false;
+	s32 encode = 0;
+	s64 mv = 0;
+	s32 i;
+
+	/* Bw settings range is from 0.5Mb/sec to 100Gb/sec */
+	if (bw < ICE_SCHED_MIN_BW || bw > ICE_SCHED_MAX_BW)
+		return status;
+
+	/* Bytes per second from kbps */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+
+	/* encode is 6 bits, but only 5 bits are really useful */
+	for (i = 0; i < 64; i++) {
+		u64 pow_result = BIT_ULL(i);
+
+		ts_rate = DIV_64BIT((s64)ICE_RL_PROF_FREQUENCY,
+				    pow_result * ICE_RL_PROF_TS_MULTIPLIER);
+		if (ts_rate <= 0)
+			continue;
+
+		/* Multiplier value */
+		mv_tmp = DIV_64BIT(bytes_per_sec * ICE_RL_PROF_MULTIPLIER,
+				   ts_rate);
+
+		/* Round to the nearest ICE_RL_PROF_MULTIPLIER */
+		mv = round_up_64bit(mv_tmp, ICE_RL_PROF_MULTIPLIER);
+
+		/* First multiplier value greater than the given
+		 * accuracy bytes
+		 */
+		if (mv > ICE_RL_PROF_ACCURACY_BYTES) {
+			encode = i;
+			found = true;
+			break;
+		}
+	}
+	if (found) {
+		u16 wm;
+
+		wm = ice_sched_calc_wakeup(bw);
+		profile->rl_multiply = CPU_TO_LE16(mv);
+		profile->wake_up_calc = CPU_TO_LE16(wm);
+		profile->rl_encode = CPU_TO_LE16(encode);
+		status = ICE_SUCCESS;
+	} else {
+		status = ICE_ERR_DOES_NOT_EXIST;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_add_rl_profile - add rl profile
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: specifies in which layer to create profile
+ *
+ * This function first checks the existing list for a profile with matching
+ * bw parameter. If it exists, it returns the associated profile, otherwise
+ * it creates a new rate limit profile for the requested bw, and adds it to
+ * the hw db and the local list. It returns the new profile or NULL on error.
+ * The caller needs to hold the scheduler lock.
+ */
+static struct ice_aqc_rl_profile_info *
+ice_sched_add_rl_profile(struct ice_port_info *pi,
+			 enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	u16 profiles_added = 0, num_profiles = 1;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw;
+	u8 profile_type;
+
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		break;
+	default:
+		return NULL;
+	}
+
+	if (!pi)
+		return NULL;
+	hw = pi->hw;
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    rl_prof_elem->bw == bw)
+			/* Return existing profile id info */
+			return rl_prof_elem;
+
+	/* Create new profile id */
+	rl_prof_elem = (struct ice_aqc_rl_profile_info *)
+		ice_malloc(hw, sizeof(*rl_prof_elem));
+
+	if (!rl_prof_elem)
+		return NULL;
+
+	status = ice_sched_bw_to_rl_profile(bw, &rl_prof_elem->profile);
+	if (status != ICE_SUCCESS)
+		goto exit_add_rl_prof;
+
+	rl_prof_elem->bw = bw;
+	/* layer_num is zero relative, and fw expects level from 1 to 9 */
+	rl_prof_elem->profile.level = layer_num + 1;
+	rl_prof_elem->profile.flags = profile_type;
+	rl_prof_elem->profile.max_burst_size = CPU_TO_LE16(hw->max_burst_size);
+
+	/* Create new entry in hw db */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_prof_elem->profile;
+	status = ice_aq_add_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+				       &profiles_added, NULL);
+	if (status || profiles_added != num_profiles)
+		goto exit_add_rl_prof;
+
+	/* Good entry - add in the list */
+	rl_prof_elem->prof_id_ref = 0;
+	LIST_ADD(&rl_prof_elem->list_entry, &pi->rl_prof_list[layer_num]);
+	return rl_prof_elem;
+
+exit_add_rl_prof:
+	ice_free(hw, rl_prof_elem);
+	return NULL;
+}
+
+/**
+ * ice_sched_del_rl_profile - remove rl profile
+ * @hw: pointer to the hw struct
+ * @rl_info: rate limit profile information
+ *
+ * If the profile id is not referenced anymore, it removes the profile id
+ * with its associated parameters from the hw db and locally. The caller
+ * needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	u16 num_profiles_removed;
+	enum ice_status status;
+	u16 num_profiles = 1;
+
+	if (rl_info->prof_id_ref != 0)
+		return ICE_ERR_IN_USE;
+
+	/* Safe to remove profile id */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_info->profile;
+	status = ice_aq_remove_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+					  &num_profiles_removed, NULL);
+	if (status || num_profiles_removed != num_profiles)
+		return ICE_ERR_CFG;
+
+	/* Delete stale entry now */
+	LIST_DEL(&rl_info->list_entry);
+	ice_free(hw, rl_info);
+	return status;
+}
+
+/**
+ * ice_sched_rm_unused_rl_prof - remove unused rl profile
+ * @pi: port information structure
+ *
+ * This function removes unused rate limit profiles from the hw and
+ * SW DB. The caller needs to hold scheduler lock.
+ */
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			if (!ice_sched_del_rl_profile(pi->hw, rl_prof_elem))
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Removed rl profile\n");
+		}
+	}
+}
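+
+/*
+ * Profile life cycle implemented by the three functions above, summarized
+ * (illustrative): profiles are shared per layer, type, and bw value, and
+ * reference counted through prof_id_ref.
+ *
+ *	p = ice_sched_add_rl_profile(pi, rl_type, bw, layer_num);
+ *	p->prof_id_ref++;			a node starts using p
+ *	...
+ *	p->prof_id_ref--;			the node stops using p
+ *	ice_sched_del_rl_profile(hw, p);	frees p only when ref is 0
+ */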
+
+/**
+ * ice_sched_update_elem - update element
+ * @hw: pointer to the hw struct
+ * @node: pointer to node
+ * @info: node info to update
+ *
+ * It updates the HW DB and the local SW DB of the node. It updates the
+ * scheduling parameters of the node from the argument info data buffer
+ * (info->data buf) and returns success or error on config sched element
+ * failure. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
+		      struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_aqc_conf_elem buf;
+	enum ice_status status;
+	u16 elem_cfgd = 0;
+	u16 num_elems = 1;
+
+	buf.generic[0] = *info;
+	/* Parent teid is reserved field in this aq call */
+	buf.generic[0].parent_teid = 0;
+	/* Element type is reserved field in this aq call */
+	buf.generic[0].data.elem_type = 0;
+	/* Flags is reserved field in this aq call */
+	buf.generic[0].data.flags = 0;
+
+	/* Update HW DB */
+	/* Configure element node */
+	status = ice_aq_cfg_sched_elems(hw, num_elems, &buf, sizeof(buf),
+					&elem_cfgd, NULL);
+	if (status || elem_cfgd != num_elems) {
+		ice_debug(hw, ICE_DBG_SCHED, "Config sched elem error\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* Config success case */
+	/* Now update local SW DB */
+	/* Only copy the data portion of info buffer */
+	node->info.data = info->data;
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_lmt - configure node sched params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @rl_prof_id: rate limit profile id
+ *
+ * This function configures node element's bw limit.
+ */
+static enum ice_status
+ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u16 rl_prof_id)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_MAX_BW:
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			return ICE_ERR_CFG;
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_SHARED_BW:
+		/* Check for removing shared bw */
+		if (rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID) {
+			/* remove shared profile */
+			data->valid_sections &= ~ICE_AQC_ELEM_VALID_SHARED;
+			data->srl_id = 0; /* clear srl field */
+
+			/* enable back EIR to default profile */
+			data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+			data->eir_bw.bw_profile_idx =
+				CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+			break;
+		}
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if ((data->valid_sections & ICE_AQC_ELEM_VALID_EIR) &&
+		    (LE16_TO_CPU(data->eir_bw.bw_profile_idx) !=
+			    ICE_SCHED_DFLT_RL_PROF_ID))
+			return ICE_ERR_CFG;
+		/* EIR bw is set to default, disable it */
+		data->valid_sections &= ~ICE_AQC_ELEM_VALID_EIR;
+		/* Okay to enable shared bw now */
+		data->valid_sections |= ICE_AQC_ELEM_VALID_SHARED;
+		data->srl_id = CPU_TO_LE16(rl_prof_id);
+		break;
+	default:
+		/* Unknown rate limit type */
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	return ice_sched_update_elem(hw, node, &buf);
+}
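+
+/*
+ * EIR/SRL transitions handled above, summarized (illustrative): the valid
+ * sections may carry EIR or SHARED, never both.
+ *
+ *	set SRL:	allowed only when EIR is unset or still at
+ *			ICE_SCHED_DFLT_RL_PROF_ID; clears VALID_EIR and
+ *			sets VALID_SHARED with srl_id
+ *	remove SRL:	rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID;
+ *			clears VALID_SHARED and re-enables EIR with the
+ *			default profile
+ */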
+
+/**
+ * ice_sched_get_node_rl_prof_id - get node's rate limit profile id
+ * @node: sched node
+ * @rl_type: rate limit type
+ *
+ * If existing profile matches, it returns the corresponding rate
+ * limit profile id, otherwise it returns an invalid id as error.
+ */
+static u16
+ice_sched_get_node_rl_prof_id(struct ice_sched_node *node,
+			      enum ice_rl_type rl_type)
+{
+	u16 rl_prof_id = ICE_SCHED_INVAL_PROF_ID;
+	struct ice_aqc_txsched_elem *data;
+
+	data = &node->info.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_CIR)
+			rl_prof_id = LE16_TO_CPU(data->cir_bw.bw_profile_idx);
+		break;
+	case ICE_MAX_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_EIR)
+			rl_prof_id = LE16_TO_CPU(data->eir_bw.bw_profile_idx);
+		break;
+	case ICE_SHARED_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			rl_prof_id = LE16_TO_CPU(data->srl_id);
+		break;
+	default:
+		break;
+	}
+
+	return rl_prof_id;
+}
+
+/**
+ * ice_sched_get_rl_prof_layer - selects rate limit profile creation layer
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @layer_index: layer index
+ *
+ * This function returns requested profile creation layer.
+ */
+static u8
+ice_sched_get_rl_prof_layer(struct ice_port_info *pi, enum ice_rl_type rl_type,
+			    u8 layer_index)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (layer_index >= hw->num_tx_sched_layers)
+		return ICE_SCHED_INVAL_LAYER_NUM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (hw->layer_info[layer_index].max_cir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_MAX_BW:
+		if (hw->layer_info[layer_index].max_eir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_SHARED_BW:
+		/* if current layer doesn't support SRL profile creation
+		 * then try a layer up or down.
+		 */
+		if (hw->layer_info[layer_index].max_srl_profiles)
+			return layer_index;
+		else if (layer_index < hw->num_tx_sched_layers - 1 &&
+			 hw->layer_info[layer_index + 1].max_srl_profiles)
+			return layer_index + 1;
+		else if (layer_index > 0 &&
+			 hw->layer_info[layer_index - 1].max_srl_profiles)
+			return layer_index - 1;
+		break;
+	default:
+		break;
+	}
+	return ICE_SCHED_INVAL_LAYER_NUM;
+}
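+
+/*
+ * Worked example for the SRL fallback above (illustrative): with
+ * hypothetical per-layer max_srl_profiles of {0, 4, 0}, a request at
+ * layer_index 0 or 2 resolves to layer 1, a request at layer_index 1 stays
+ * at layer 1, and a request in an all-zero region returns
+ * ICE_SCHED_INVAL_LAYER_NUM.
+ */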
+
+/**
+ * ice_sched_get_srl_node - get shared rate limit node
+ * @node: tree node
+ * @srl_layer: shared rate limit layer
+ *
+ * This function returns SRL node to be used for shared rate limit purpose.
+ * The caller needs to hold scheduler lock.
+ */
+static struct ice_sched_node *
+ice_sched_get_srl_node(struct ice_sched_node *node, u8 srl_layer)
+{
+	if (srl_layer > node->tx_sched_layer)
+		return node->children[0];
+	else if (srl_layer < node->tx_sched_layer)
+		/* Node can't be created without a parent. It will always
+		 * have a valid parent except root node.
+		 */
+		return node->parent;
+	else
+		return node;
+}
+
+/**
+ * ice_sched_rm_rl_profile - remove rl profile id
+ * @pi: port information structure
+ * @layer_num: layer number where profiles are saved
+ * @profile_type: profile type like EIR, CIR, or SRL
+ * @profile_id: profile id to remove
+ *
+ * This function removes the rate limit profile of type 'profile_type' with
+ * profile id 'profile_id' from layer 'layer_num'. The caller needs to hold
+ * the scheduler lock.
+ */
+static enum ice_status
+ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
+			u16 profile_id)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* Check the existing list for rl profile */
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    LE16_TO_CPU(rl_prof_elem->profile.profile_id) ==
+		    profile_id) {
+			if (rl_prof_elem->prof_id_ref)
+				rl_prof_elem->prof_id_ref--;
+
+			/* Remove old profile id from database */
+			status = ice_sched_del_rl_profile(pi->hw, rl_prof_elem);
+			if (status && status != ICE_ERR_IN_USE)
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+			break;
+		}
+	if (status == ICE_ERR_IN_USE)
+		status = ICE_SUCCESS;
+	return status;
+}
+
+/**
+ * ice_sched_set_node_bw_dflt - set node's bandwidth limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ * @layer_num: layer number where rl profiles are saved
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   enum ice_rl_type rl_type, u8 layer_num)
+{
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 profile_type;
+	u16 rl_prof_id;
+	u16 old_id;
+
+	hw = pi->hw;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		/* No SRL is configured for default case */
+		rl_prof_id = ICE_SCHED_NO_SHARED_RL_PROF_ID;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* Remove stale rl profile id */
+	if (old_id == ICE_SCHED_DFLT_RL_PROF_ID ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID)
+		return status;
+	return ice_sched_rm_rl_profile(pi, layer_num, profile_type, old_id);
+}
+
+/**
+ * ice_sched_set_eir_srl_excl - set EIR/SRL exclusiveness
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @layer_num: layer number where rate limit profiles are saved
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth value
+ *
+ * This function prepares the node element's bandwidth to use either SRL or
+ * EIR exclusively.
+ * EIR bw and Shared bw profiles are mutually exclusive and hence only one of
+ * them may be set for any given element. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   u8 layer_num, enum ice_rl_type rl_type, u32 bw)
+{
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node passed in this case, it may be different node */
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* SRL being removed, ice_sched_cfg_node_bw_lmt()
+			 * enables EIR to default. EIR is not set in this
+			 * case, so no additional action is required.
+			 */
+			return ICE_SUCCESS;
+
+		/* SRL being configured, set EIR to default here.
+		 * ice_sched_cfg_node_bw_lmt() disables EIR when it
+		 * configures SRL
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node, ICE_MAX_BW,
+						  layer_num);
+	} else if (rl_type == ICE_MAX_BW &&
+		   node->info.data.valid_sections & ICE_AQC_ELEM_VALID_SHARED) {
+		/* Remove Shared profile. Set default shared bw call
+		 * removes shared profile for a node.
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node,
+						  ICE_SHARED_BW,
+						  layer_num);
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_node_bw - set node's bandwidth
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: layer number
+ *
+ * This function adds new profile corresponding to requested bw, configures
+ * node's rl profile id of type cir, eir, or srl, and removes old profile
+ * id from local database. The caller needs to hold scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
+		      enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 old_id, rl_prof_id;
+
+	rl_prof_info = ice_sched_add_rl_profile(pi, rl_type, bw, layer_num);
+	if (!rl_prof_info)
+		return status;
+
+	rl_prof_id = LE16_TO_CPU(rl_prof_info->profile.profile_id);
+
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* New changes have been applied */
+	/* Increment the profile id reference count */
+	rl_prof_info->prof_id_ref++;
+
+	/* Check for old id removal */
+	if ((old_id == ICE_SCHED_DFLT_RL_PROF_ID && rl_type != ICE_SHARED_BW) ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID || old_id == rl_prof_id)
+		return status;
+
+	return ice_sched_rm_rl_profile(pi, layer_num,
+				       rl_prof_info->profile.flags,
+				       old_id);
+}
+
+/**
+ * ice_sched_set_node_bw_lmt - set node's bw limit
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * It updates node's bw limit parameters like bw rl profile id of type cir,
+ * eir, or srl. The caller needs to hold scheduler lock.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_node *cfg_node = node;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 layer_num;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+						node->tx_sched_layer);
+	if (layer_num >= hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node may be different */
+		cfg_node = ice_sched_get_srl_node(node, layer_num);
+		if (!cfg_node)
+			return ICE_ERR_CFG;
+	}
+	/* EIR bw and Shared bw profiles are mutually exclusive and
+	 * hence only one of them may be set for any given element
+	 */
+	status = ice_sched_set_eir_srl_excl(pi, cfg_node, layer_num, rl_type,
+					    bw);
+	if (status)
+		return status;
+	if (bw == ICE_SCHED_DFLT_BW)
+		return ice_sched_set_node_bw_dflt(pi, cfg_node, rl_type,
+						  layer_num);
+	return ice_sched_set_node_bw(pi, cfg_node, rl_type, bw, layer_num);
+}
+
+/**
+ * ice_sched_set_node_bw_dflt_lmt - set node's bw limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi,
+			       struct ice_sched_node *node,
+			       enum ice_rl_type rl_type)
+{
+	return ice_sched_set_node_bw_lmt(pi, node, rl_type,
+					 ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_validate_srl_node - Check node for SRL applicability
+ * @node: sched node to configure
+ * @sel_layer: selected SRL layer
+ *
+ * This function checks if the SRL can be applied to a selected layer node on
+ * behalf of the requested node (first argument). This function needs to be
+ * called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
+{
+	/* SRL profiles are not available on all layers. Check if the
+	 * SRL profile can be applied to a node above or below the
+	 * requested node. SRL configuration is possible only if the
+	 * selected layer's node has single child.
+	 */
+	if (sel_layer == node->tx_sched_layer ||
+	    ((sel_layer == node->tx_sched_layer + 1) &&
+	    node->num_children == 1) ||
+	    ((sel_layer == node->tx_sched_layer - 1) &&
+	    (node->parent && node->parent->num_children == 1)))
+		return ICE_SUCCESS;
+
+	return ICE_ERR_CFG;
+}
+
+/**
+ * ice_sched_set_q_bw_lmt - sets queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of queue scheduling node.
+ */
+static enum ice_status
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		goto exit_q_bw_lmt;
+	}
+
+	/* Return error if it is not a leaf node */
+	if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
+		goto exit_q_bw_lmt;
+
+	/* SRL bandwidth layer selection */
+	if (rl_type == ICE_SHARED_BW) {
+		u8 sel_layer; /* selected layer */
+
+		sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
+							node->tx_sched_layer);
+		if (sel_layer >= pi->hw->num_tx_sched_layers) {
+			status = ICE_ERR_PARAM;
+			goto exit_q_bw_lmt;
+		}
+		status = ice_sched_validate_srl_node(node, sel_layer);
+		if (status)
+			goto exit_q_bw_lmt;
+	}
+
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_q_bw_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_q_bw_lmt - configure queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+}
+
+/**
+ * ice_cfg_q_bw_dflt_lmt - configure queue bw default limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ *
+ * This function configures bw default limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+}
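+
+/*
+ * Usage sketch (illustrative, not part of the driver): limit a LAN Tx
+ * queue to 1 Gbps and later restore the default, unlimited profile. q_teid
+ * is the leaf node TEID obtained when the queue was added.
+ *
+ *	enum ice_status ret;
+ *
+ *	ret = ice_cfg_q_bw_lmt(pi, q_teid, ICE_MAX_BW, 1000000);
+ *	if (!ret)
+ *		ret = ice_cfg_q_bw_dflt_lmt(pi, q_teid, ICE_MAX_BW);
+ */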
+
+/**
+ * ice_sched_save_tc_node_bw - save tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function saves the modified values of bandwidth settings for later
+ * replay purpose (restore) after reset.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_lmt - sets tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bandwidth limit of tc node.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+			     enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw;
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
+	if (!status)
+		status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
+
+exit_set_tc_node_bw:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_lmt - configure tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
+}
+
+/**
+ * ice_cfg_tc_node_bw_dflt_lmt - configure tc node bw default limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ *
+ * This function configures bw default limit of tc node.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_save_tc_node_bw_alloc - save tc node's bw alloc information
+ * @pi: port information structure
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of tc type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+				enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_alloc - set tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures bandwidth alloc of tc node, also saves the
+ * changed settings for replay purposes, and returns success if it succeeds
+ * in modifying bandwidth alloc setting.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			       enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
+					     bw_alloc);
+	if (status)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+
+exit_set_tc_node_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_alloc - configure tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures the bw allocation of the tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+}
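+
+/*
+ * Usage sketch (illustrative, assuming bw_alloc acts as a relative weight
+ * as ice_sched_cfg_node_bw_alloc() below describes): weight TC 0 and TC 1
+ * at 60/40 for max (EIR) bandwidth.
+ *
+ *	ret = ice_cfg_tc_node_bw_alloc(pi, 0, ICE_MAX_BW, 60);
+ *	if (!ret)
+ *		ret = ice_cfg_tc_node_bw_alloc(pi, 1, ICE_MAX_BW, 40);
+ */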
+
+/**
+ * ice_sched_set_agg_bw_dflt_lmt - set agg node's bw limit to default
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves the aggregator id based on VSI id and tc,
+ * and sets node's bw limit to default. This function needs to be
+ * called with the scheduler lock held.
+ */
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *node;
+
+		node = vsi_ctx->sched.ag_node[tc];
+		if (!node)
+			continue;
+
+		/* Set min profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
+		if (status)
+			break;
+
+		/* Set max profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
+		if (status)
+			break;
+
+		/* Remove shared profile, if there is one */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node,
+							ICE_SHARED_BW);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_get_node_by_id_type - get node from id type
+ * @pi: port information structure
+ * @id: identifier
+ * @agg_type: type of aggregator
+ * @tc: traffic class
+ *
+ * This function returns the node identified by id and agg_type, based on
+ * traffic class (tc). This function needs to be called with the scheduler
+ * lock held.
+ */
+static struct ice_sched_node *
+ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
+			      enum ice_agg_type agg_type, u8 tc)
+{
+	struct ice_sched_node *node = NULL;
+	struct ice_sched_node *child_node;
+
+	switch (agg_type) {
+	case ICE_AGG_TYPE_VSI: {
+		struct ice_vsi_ctx *vsi_ctx;
+		u16 vsi_handle = (u16)id;
+
+		if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+			break;
+		/* Get sched_vsi_info */
+		vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+		if (!vsi_ctx)
+			break;
+		node = vsi_ctx->sched.vsi_node[tc];
+		break;
+	}
+
+	case ICE_AGG_TYPE_AGG: {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (tc_node)
+			node = ice_sched_get_agg_node(pi->hw, tc_node, id);
+		break;
+	}
+
+	case ICE_AGG_TYPE_Q:
+		/* The current implementation allows modifying only a single queue */
+		node = ice_sched_get_node(pi, id);
+		break;
+
+	case ICE_AGG_TYPE_QG:
+		/* The current implementation allows modifying only a single qg */
+		child_node = ice_sched_get_node(pi, id);
+		if (!child_node)
+			break;
+		node = child_node->parent;
+		break;
+
+	default:
+		break;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_set_node_bw_lmt_per_tc - set node bw limit per tc
+ * @pi: port information structure
+ * @id: id (software VSI handle or AGG id)
+ * @agg_type: aggregator type (VSI or AGG type node)
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of VSI or Aggregator scheduling node
+ * based on tc information from passed in argument bw.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return status;
+
+	if (rl_type == ICE_UNKNOWN_BW)
+		return status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_get_node_by_id_type(pi, id, agg_type, tc);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong id, agg type, or tc\n");
+		goto exit_set_node_bw_lmt_per_tc;
+	}
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_set_node_bw_lmt_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_validate_vsi_srl_node - validate VSI SRL node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function validates the SRL node of the VSI node if the available SRL
+ * layer is different from the VSI node layer on all tc(s). This function
+ * needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		enum ice_status status;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = vsi_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(vsi_node, sel_layer);
+		if (status)
+			return status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_vsi_bw_shared_lmt - set VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle. When a
+ * bw value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the node.
+ */
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_vsi_srl_node(pi, vsi_handle);
+	if (status)
+		goto exit_set_vsi_bw_shared_lmt;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, vsi_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, vsi_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_set_vsi_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_validate_agg_srl_node - validate AGG SRL node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the SRL node of the AGG node if the available SRL
+ * layer is different from the AGG node layer on all tc(s). This function
+ * needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &pi->hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		return ICE_ERR_PARAM;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = agg_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(agg_node, sel_layer);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_bw_shared_lmt - set aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * nodes across all traffic classes for aggregator matching agg_id. When
+ * bw value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the
+ * node(s).
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *tmp;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_srl_node(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, tmp, &pi->hw->agg_list,
+				 ice_sched_agg_info, list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_agg_bw_shared_lmt;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		struct ice_sched_node *tc_node, *agg_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, agg_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, agg_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_agg_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_sibl_node_prio - configure node sibling priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only. This
+ * function needs to be called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	priority = (priority << ICE_AQC_ELEM_GENERIC_PRIO_S) &
+		   ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic &= ~ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic |= priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_alloc - configure node bw weight/alloc params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @bw_alloc: bw weight/allocation
+ *
+ * This function configures node element's bw allocation.
+ */
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	if (rl_type == ICE_MIN_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else if (rl_type == ICE_MAX_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else {
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_agg_cfg - create an aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function creates an aggregator node and intermediate nodes if required
+ * for the given TC
+ */
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *parent, *agg_node, *tc_node;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u32 first_node_teid;
+	u16 num_nodes_added;
+	u8 i, aggl;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	/* Does Agg node already exist? */
+	if (agg_node)
+		return status;
+
+	aggl = ice_sched_get_agg_layer(hw);
+
+	/* need one node in Agg layer */
+	num_nodes[aggl] = 1;
+
+	/* Check whether the intermediate nodes have space to add the
+	 * new agg. If they are full, then SW needs to allocate a new
+	 * intermediate node on those layers
+	 */
+	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
+		parent = ice_sched_get_first_node(hw, tc_node, i);
+
+		/* scan all the siblings */
+		while (parent) {
+			if (parent->num_children < hw->max_children[i])
+				break;
+			parent = parent->sibling;
+		}
+
+		/* all the nodes are full, reserve one for this layer */
+		if (!parent)
+			num_nodes[i]++;
+	}
+
+	/* add the agg node */
+	parent = tc_node;
+	for (i = hw->sw_entry_point_layer; i <= aggl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			/* register the aggregator id with the agg node */
+			if (parent && i == aggl)
+				parent->agg_id = agg_id;
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_is_agg_inuse - check whether the agg is in use or not
+ * @pi: port information structure
+ * @node: node pointer
+ *
+ * This function checks whether the aggregator is attached to any VSI.
+ */
+static bool
+ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	if (node->tx_sched_layer < vsil - 1) {
+		for (i = 0; i < node->num_children; i++)
+			if (ice_sched_is_agg_inuse(pi, node->children[i]))
+				return true;
+		return false;
+	} else {
+		return node->num_children ? true : false;
+	}
+}
+
+/**
+ * ice_sched_rm_agg_cfg - remove the aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function removes the aggregator node and intermediate nodes if any
+ * from the given TC
+ */
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Can't remove the agg node if it has children */
+	if (ice_sched_is_agg_inuse(pi, agg_node))
+		return ICE_ERR_IN_USE;
+
+	/* need to remove the whole subtree if agg node is the
+	 * only child.
+	 */
+	while (agg_node->tx_sched_layer > hw->sw_entry_point_layer) {
+		struct ice_sched_node *parent = agg_node->parent;
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (parent->num_children > 1)
+			break;
+
+		agg_node = parent;
+	}
+
+	ice_free_sched_node(pi, agg_node);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_get_free_vsi_parent - Find a free parent node in agg subtree
+ * @hw: pointer to the hw struct
+ * @node: pointer to a child node
+ * @num_nodes: num nodes count array
+ *
+ * This function walks through the aggregator subtree to find a free parent
+ * node
+ */
+static struct ice_sched_node *
+ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
+			      u16 *num_nodes)
+{
+	u8 l = node->tx_sched_layer;
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* Is it the VSI parent layer? */
+	if (l == vsil - 1)
+		return (node->num_children < hw->max_children[l]) ? node : NULL;
+
+	/* We have intermediate nodes. Let's walk through the subtree. If the
+	 * intermediate node has space to add a new node then clear the count
+	 */
+	if (node->num_children < hw->max_children[l])
+		num_nodes[l] = 0;
+	/* The recursive call below is intentional and won't go more than
+	 * 2 or 3 levels deep.
+	 */
+	for (i = 0; i < node->num_children; i++) {
+		struct ice_sched_node *parent;
+
+		parent = ice_sched_get_free_vsi_parent(hw, node->children[i],
+						       num_nodes);
+		if (parent)
+			return parent;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_sched_update_parent - update the node's parent in SW DB
+ * @new_parent: pointer to a new parent node
+ * @node: pointer to a child node
+ *
+ * This function removes the child from the old parent and adds it to a new
+ * parent
+ */
+static void
+ice_sched_update_parent(struct ice_sched_node *new_parent,
+			struct ice_sched_node *node)
+{
+	struct ice_sched_node *old_parent;
+	u8 i, j;
+
+	old_parent = node->parent;
+
+	/* update the old parent children */
+	for (i = 0; i < old_parent->num_children; i++)
+		if (old_parent->children[i] == node) {
+			for (j = i + 1; j < old_parent->num_children; j++)
+				old_parent->children[j - 1] =
+					old_parent->children[j];
+			old_parent->num_children--;
+			break;
+		}
+
+	/* now move the node to a new parent */
+	new_parent->children[new_parent->num_children++] = node;
+	node->parent = new_parent;
+	node->info.parent_teid = new_parent->info.node_teid;
+}
+
+/**
+ * ice_sched_move_nodes - move child nodes to a given parent
+ * @pi: port information structure
+ * @parent: pointer to parent node
+ * @num_items: number of child nodes to be moved
+ * @list: pointer to child node teids
+ *
+ * This function moves the child nodes to a given parent.
+ */
+static enum ice_status
+ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
+		     u16 num_items, u32 *list)
+{
+	struct ice_aqc_move_elem *buf;
+	struct ice_sched_node *node;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw;
+	u16 grps_movd = 0;
+	u8 i;
+
+	hw = pi->hw;
+
+	if (!parent || !num_items)
+		return ICE_ERR_PARAM;
+
+	/* Does the parent have enough space? */
+	if (parent->num_children + num_items >
+	    hw->max_children[parent->tx_sched_layer])
+		return ICE_ERR_AQ_FULL;
+
+	buf = (struct ice_aqc_move_elem *)ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_items; i++) {
+		node = ice_sched_find_node_by_teid(pi->root, list[i]);
+		if (!node) {
+			status = ICE_ERR_PARAM;
+			goto move_err_exit;
+		}
+
+		buf->hdr.src_parent_teid = node->info.parent_teid;
+		buf->hdr.dest_parent_teid = parent->info.node_teid;
+		buf->teid[0] = node->info.node_teid;
+		buf->hdr.num_elems = CPU_TO_LE16(1);
+		status = ice_aq_move_sched_elems(hw, 1, buf, sizeof(*buf),
+						 &grps_movd, NULL);
+		if (status || grps_movd != 1) {
+			status = ICE_ERR_CFG;
+			goto move_err_exit;
+		}
+
+		/* update the SW DB */
+		ice_sched_update_parent(parent, node);
+	}
+
+move_err_exit:
+	ice_free(hw, buf);
+	return status;
+}
+
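+/* Usage note: ice_sched_move_vsi_to_agg() below is the typical caller; it
+ * moves a single VSI node by passing num_items = 1 and the VSI node's TEID.
+ */
+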
+/**
+ * ice_sched_move_vsi_to_agg - move VSI to aggregator node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function moves a VSI to an aggregator node or its subtree.
+ * Intermediate nodes may be created if required.
+ */
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc)
+{
+	struct ice_sched_node *vsi_node, *agg_node, *tc_node, *parent;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u32 first_node_teid, vsi_teid;
+	enum ice_status status;
+	u16 num_nodes_added;
+	u8 aggl, vsil, i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	aggl = ice_sched_get_agg_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	/* initialize intermediate node count to 1 between agg and VSI layers */
+	for (i = aggl + 1; i < vsil; i++)
+		num_nodes[i] = 1;
+
+	/* Check whether the agg subtree has any free node to add the VSI */
+	for (i = 0; i < agg_node->num_children; i++) {
+		parent = ice_sched_get_free_vsi_parent(pi->hw,
+						       agg_node->children[i],
+						       num_nodes);
+		if (parent)
+			goto move_nodes;
+	}
+
+	/* add new nodes */
+	parent = agg_node;
+	for (i = aggl + 1; i < vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+	}
+
+move_nodes:
+	vsi_teid = LE32_TO_CPU(vsi_node->info.node_teid);
+	return ice_sched_move_nodes(pi, parent, 1, &vsi_teid);
+}
+
+/**
+ * ice_cfg_rl_burst_size - Set burst size value
+ * @hw: pointer to the hw struct
+ * @bytes: burst size in bytes
+ *
+ * This function configures/sets the burst size to the requested new value.
+ * The new burst size value is used for future rate limit calls. It doesn't
+ * change the existing or previously created RL profiles.
+ */
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
+{
+	u16 burst_size_to_prog;
+
+	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
+	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
+		return ICE_ERR_PARAM;
+	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
+		/* byte granularity case */
+		/* Disable MSB granularity bit */
+		burst_size_to_prog = ICE_BYTE_GRANULARITY;
+		/* round number to nearest 256 granularity */
+		bytes = ice_round_to_num(bytes, 256);
+		/* check rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
+		burst_size_to_prog |= (u16)bytes;
+	} else {
+		/* k bytes granularity case */
+		/* Enable MSB granularity bit */
+		burst_size_to_prog = ICE_KBYTE_GRANULARITY;
+		/* round number to nearest 1024 granularity */
+		bytes = ice_round_to_num(bytes, 1024);
+		/* check rounding doesn't go beyond the allowed maximum */
+		if (bytes > ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY;
+		/* The value is in k bytes */
+		burst_size_to_prog |= (u16)(bytes / 1024);
+	}
+	hw->max_burst_size = burst_size_to_prog;
+	return ICE_SUCCESS;
+}
+
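+/* Worked example for the KB-granularity branch above: a request of 5000
+ * bytes exceeds ICE_MAX_BURST_SIZE_BYTE_GRANULARITY (2047), so 5000 is
+ * rounded to the nearest 1024 (5120) and 0x800 | (5120 / 1024) = 0x805 is
+ * stored in hw->max_burst_size.
+ */
+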
+/**
+ * ice_sched_replay_node_prio - re-configure node priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: priority value
+ *
+ * This function configures node element's priority value. It
+ * needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			   u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	data->generic = priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_replay_node_bw - replay node(s) bw
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @bw_t_info: bw type information
+ *
+ * This function restores node's bw from bw_t_info. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node,
+			 struct ice_bw_type_info *bw_t_info)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	u16 bw_alloc;
+
+	if (!node)
+		return status;
+	if (!ice_is_any_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CNT))
+		return ICE_SUCCESS;
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_PRIO)) {
+		status = ice_sched_replay_node_prio(hw, node,
+						    bw_t_info->generic);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MIN_BW,
+						   bw_t_info->cir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR_WT)) {
+		bw_alloc = bw_t_info->cir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MIN_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
+						   bw_t_info->eir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR_WT)) {
+		bw_alloc = bw_t_info->eir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MAX_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_SHARED))
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_SHARED_BW,
+						   bw_t_info->shared_bw);
+	return status;
+}
+
+/**
+ * ice_sched_replay_agg_bw - replay aggregator node(s) bw
+ * @hw: pointer to the hw struct
+ * @agg_info: aggregator data structure
+ *
+ * This function replays the bw of aggregator type nodes. The caller needs to
+ * hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_agg_bw(struct ice_hw *hw, struct ice_sched_agg_info *agg_info)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_any_bit_set(agg_info->bw_t_info[tc].bw_t_bitmap,
+					ICE_BW_TYPE_CNT))
+			continue;
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		agg_node = ice_sched_get_agg_node(hw, tc_node,
+						  agg_info->agg_id);
+		if (!agg_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		status = ice_sched_replay_node_bw(hw, agg_node,
+						  &agg_info->bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_get_ena_tc_bitmap - get enabled TC bitmap
+ * @pi: port info struct
+ * @tc_bitmap: 8 bits TC bitmap to check
+ * @ena_tc_bitmap: 8 bits enabled TC bitmap to return
+ *
+ * This function returns the enabled TC bitmap in ena_tc_bitmap. Some TCs may
+ * be missing after a reset, so only the TCs that are currently enabled are
+ * returned. It must be called with the scheduler lock held.
+ */
+static void
+ice_sched_get_ena_tc_bitmap(struct ice_port_info *pi, ice_bitmap_t *tc_bitmap,
+			    ice_bitmap_t *ena_tc_bitmap)
+{
+	u8 tc;
+
+	/* Some tc(s) may be missing after reset, adjust for replay */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++)
+		if (ice_is_tc_ena(*tc_bitmap, tc) &&
+		    (ice_sched_get_tc_node(pi, tc)))
+			ice_set_bit(tc, ena_tc_bitmap);
+}
+
+/**
+ * ice_sched_replay_agg - recreate aggregator node(s)
+ * @hw: pointer to the hw struct
+ *
+ * This function recreates aggregator type nodes that were not replayed
+ * earlier. It also replays aggregator bw information. These aggregator nodes
+ * are not associated with a VSI type node yet.
+ */
+void ice_sched_replay_agg(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		/* replay agg (re-create aggregator node) */
+		if (!ice_cmp_bitmap(agg_info->tc_bitmap,
+				    agg_info->replay_tc_bitmap,
+				    ICE_MAX_TRAFFIC_CLASS)) {
+			ice_declare_bitmap(replay_bitmap,
+					   ICE_MAX_TRAFFIC_CLASS);
+			enum ice_status status;
+
+			ice_zero_bitmap(replay_bitmap,
+					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_sched_get_ena_tc_bitmap(pi,
+						    agg_info->replay_tc_bitmap,
+						    replay_bitmap);
+			status = ice_sched_cfg_agg(hw->port_info,
+						   agg_info->agg_id,
+						   ICE_AGG_TYPE_AGG,
+						   replay_bitmap);
+			if (status) {
+				ice_info(hw, "Replay agg id[%d] failed\n",
+					 agg_info->agg_id);
+				/* Move on to next one */
+				continue;
+			}
+			/* Replay agg node bw (restore agg bw) */
+			status = ice_sched_replay_agg_bw(hw, agg_info);
+			if (status)
+				ice_info(hw, "Replay agg bw [id=%d] failed\n",
+					 agg_info->agg_id);
+		}
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_agg_vsi_preinit - Agg/VSI replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * This function initializes the aggregator(s) TC bitmap to zero, a required
+ * pre-init step for replaying aggregators.
+ */
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_info->tc_bitmap[0] = 0;
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			agg_vsi_info->tc_bitmap[0] = 0;
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_tc_node_bw - replay tc node(s) bw
+ * @hw: pointer to the hw struct
+ *
+ * This function replays TC node bw. The caller needs to hold the scheduler
+ * lock.
+ */
+enum ice_status
+ice_sched_replay_tc_node_bw(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node)
+			continue; /* tc not present */
+		status = ice_sched_replay_node_bw(hw, tc_node,
+						  &hw->tc_node_bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_bw - replay VSI type node(s) bw
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function replays VSI type nodes' bandwidth. It needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
+			ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_bw_type_info *bw_t_info;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
+		status = ice_sched_replay_node_bw(hw, vsi_node, bw_t_info);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_agg - replay agg & VSI to aggregator node(s)
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays aggregator node, VSI to aggregator type nodes, and
+ * their node bandwidth information. This function needs to be called with
+ * scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status;
+
+	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
+	if (!agg_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	ice_sched_get_ena_tc_bitmap(pi, agg_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Replay agg node associated to vsi_handle */
+	status = ice_sched_cfg_agg(hw->port_info, agg_info->agg_id,
+				   ICE_AGG_TYPE_AGG, replay_bitmap);
+	if (status)
+		return status;
+	/* Replay agg node bw (restore agg bw) */
+	status = ice_sched_replay_agg_bw(hw, agg_info);
+	if (status)
+		return status;
+
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	ice_sched_get_ena_tc_bitmap(pi, agg_vsi_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Move this VSI (vsi_handle) to above aggregator */
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_info->agg_id, vsi_handle,
+					    replay_bitmap);
+	if (status)
+		return status;
+	/* Replay VSI bw (restore VSI bw) */
+	return ice_sched_replay_vsi_bw(hw, vsi_handle,
+				       agg_vsi_info->tc_bitmap);
+}
+
+/**
+ * ice_replay_vsi_agg - replay VSI to aggregator node
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays association of VSI to aggregator type nodes, and
+ * node bandwidth information.
+ */
+enum ice_status
+ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_replay_vsi_agg(hw, vsi_handle);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..a556594
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12 bit register that is configured while creating the RL
+ * profile(s). The MSB is a granularity bit that tells the granularity type:
+ * 0 - LSB bits are in bytes granularity
+ * 1 - LSB bits are in 1K bytes granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
+
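+/* Example: a 1500 byte burst rounds to 1536 (the nearest 256) and is
+ * programmed as 0x600 with the granularity bit clear; bursts above 2047
+ * bytes set the 0x800 bit and carry the size in KB in the lower bits.
+ */
+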
+#define ICE_RL_PROF_FREQUENCY 446000000
+#define ICE_RL_PROF_ACCURACY_BYTES 128
+#define ICE_RL_PROF_MULTIPLIER 10000
+#define ICE_RL_PROF_TS_MULTIPLIER 32
+#define ICE_RL_PROF_FRACTION 512
+
+struct rl_profile_params {
+	u32 bw;			/* in Kbps */
+	u16 rl_multiplier;
+	u16 wake_up_calc;
+	u16 rl_encode;
+};
+
+/* BW rate limit profile parameters list entry along
+ * with bandwidth maintained per layer in port info
+ */
+struct ice_aqc_rl_profile_info {
+	struct ice_aqc_rl_profile_elem profile;
+	struct LIST_ENTRY_TYPE list_entry;
+	u32 bw;			/* requested */
+	u16 prof_id_ref;	/* profile id to node association ref count */
+};
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+	/* save agg vsi TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	/* save agg TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+/* FW AQ command calls */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf, u16 buf_size,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd);
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+/* Get a scheduling node from SW DB for given TEID */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid);
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id);
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle);
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd);
+
+/* Tx scheduler rate limiter functions */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+	    enum ice_agg_type agg_type, u8 tc_bitmap);
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap);
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw);
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio);
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc);
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node);
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc);
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info);
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi);
+#endif /* _ICE_SCHED_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 08/31] net/ice/base: add virtual switch code
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (6 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
                     ` (22 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to handle the virtual switch within the NIC: VSI context
management and switch (lookup) rule handling.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 2812 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  333 +++++
 2 files changed, 3145 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..0379cd0
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2812 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * A word on the hardcoded values:
+ * byte 0 = 0x2: to identify it as locally administered DA MAC
+ * byte 6 = 0x2: to identify it as locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In case of VLAN filter first two bytes defines ether type (0x8100)
+ *	and remaining two bytes are placeholder for programming a given VLAN id
+ *	In case of Ether type filter it is treated as header without VLAN tag
+ *	and bytes 12 and 13 are used to program a given Ether type instead
+ */
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
+
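+/* Note on the size macros above: sizeof(((struct T *)0)->member) is the
+ * standard C idiom for taking a member's size without an instance. The
+ * "- 1" and "- sizeof(...->act)" / "- sizeof(...->vsi)" terms assume (as is
+ * the usual pattern for these AQ structures) a one-element trailing array
+ * placeholder in the struct definitions, which is subtracted here and then
+ * re-added with the real element count.
+ */
+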
+/**
+ * ice_init_def_sw_recp - initialize the recipe book keeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller of this function first calls it with *req_desc set
+ * to 0. If the response from f/w has *req_desc set to 0, all the switch
+ * configuration information has been returned; if non-zero (meaning not all
+ * the information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter. It reflects the number of elements
+ * in the response buffer. The caller should use *num_elems while parsing the
+ * response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
+
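+/* Typical retry loop for the paged response (ice_get_initial_sw_cfg() below
+ * is the real caller; buf and len here are illustrative):
+ *
+ *	u16 req_desc = 0, num_elems;
+ *
+ *	do {
+ *		status = ice_aq_get_sw_cfg(hw, buf, len, &req_desc,
+ *					   &num_elems, NULL);
+ *	} while (!status && req_desc);
+ */
+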
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware and also add it into the VSI handle
+ * list. If this function gets called after reset for existing VSIs then
+ * update with the new HW VSI number in the corresponding VSI handle list
+ * entry.
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
+
+/**
+ * ice_free_vsi - free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		/* Setting LB for prune actions will result in replicated
+		 * packets to the internal switch that will be dropped.
+		 */
+		if (fi->lkup_type != ICE_SW_LKUP_VLAN)
+			fi->lb_en = true;
+
+		/* Set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. Any one of the following is true:
+		 * 2.1 The lookup is a directional lookup like ethertype,
+		 * promiscuous, ethertype-mac, promiscuous-vlan
+		 * and default-port OR
+		 * 2.2 The lookup is VLAN, OR
+		 * 2.3 The lookup is MAC with mcast or bcast addr for MAC, OR
+		 * 2.4 The lookup is MAC_VLAN with mcast or bcast addr for MAC.
+		 *
+		 * OR
+		 *
+		 * The switch is a VEPA.
+		 *
+		 * In all other cases, the LAN enable has to be set to false.
+		 */
+		if (hw->evb_veb) {
+			if (fi->lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC ||
+			    fi->lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC_VLAN ||
+			    fi->lkup_type == ICE_SW_LKUP_DFLT ||
+			    fi->lkup_type == ICE_SW_LKUP_VLAN ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)))
+				fi->lan_en = true;
+		} else {
+			fi->lan_en = true;
+		}
+	}
+}
+
+/**
+ * ice_ilog2 - Calculates integer log base 2 of a number
+ * @n: number on which to perform operation
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
+
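+/* e.g. ice_ilog2(8) == 3 and ice_ilog2(0) == -1; ice_fill_sw_rule() below
+ * uses it to encode a queue group size as a power-of-two queue region.
+ */
+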
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on f_info
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
+
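+/* Example of the header rewrite above: for a VLAN filter with vlan_id = 100,
+ * bytes 12-13 of the dummy header keep the 0x8100 ether type and bytes 14-15
+ * receive 0x0064 (big endian); for an Ether type filter, bytes 12-13 are
+ * instead overwritten with the requested ether type.
+ */
+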
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold a software marker and update the switch rule
+ * entry pointed to by m_ent with the newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule ID of the previously created rule with a single
+	 * act. Once the update happens, hardware will treat this as a large
+	 * action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list id to VSI mapping
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
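+	/* Pick the rule type: VLAN lookups use the pruning-list opcodes,
+	 * while all other supported lookups use the VSI-list opcodes.
+	 */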
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The bookkeeping entries will be removed when the base driver
+	 * calls the remove-filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do the bookkeeping associated with adding filter
+ * information. The algorithm for the bookkeeping is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added so far
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using the switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list ID
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
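+	/* Combining queue or queue-group actions with VSI lists is not
+	 * implemented, so such action mixes are rejected up front.
+	 */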
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry had a large action, the large action needs
+		 * to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists needs to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list ID found containing vsi_handle
+ *
+ * Helper function to search a VSI list with single entry containing given VSI
+ * handle element. This can be extended further to search VSI list with more
+ * than 1 vsi_count. Returns pointer to VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
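+	/* A rule with the same lookup data already exists, so another VSI is
+	 * subscribing to the same filter; fold the new VSI into a VSI list
+	 * instead of creating a duplicate rule.
+	 */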
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list must be emptied before this function is called to remove the
+ * VSI list rule.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
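+	/* If a single non-VLAN VSI remains, convert the rule back to a plain
+	 * forward-to-VSI rule so the now-singleton VSI list can be freed.
+	 */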
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe ID for which the rule needs to be removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
+		/* a ref_cnt > 1 indicates that the vsi_list is being
+		 * shared by multiple rules. Decrement the ref_cnt and
+		 * remove this rule, but do not modify the list, as it
+		 * is in-use by other rules.
+		 */
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = true;
+	} else {
+		/* a ref_cnt of 1 indicates the vsi_list is only used
+		 * by one rule. However, the original removal request is only
+		 * for a single VSI. Update the vsi_list first, and only
+		 * remove the rule if there are no further VSIs in this list.
+		 */
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		ice_free(hw, s_rule);
+
+		/* Remove the bookkeeping entry from the list */
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't
+ * check for duplicates in this case; removing duplicates from a given
+ * list should be taken care of by the caller of this function.
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is vsi num */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
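+	/* The first pass above validated each entry, programmed multicast
+	 * (and shared unicast) addresses individually, and counted the
+	 * unique unicast addresses for the bulk add below.
+	 */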
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Issue the AQ command in chunks of at most ICE_AQ_MAX_BUF_LEN bytes */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill up rule id based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The bookkeeping entries will be removed when the
+			 * base driver calls the remove-filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing the VSI that
+			 * we want to add. If found, reuse the same vsi_list_id
+			 * for this new VLAN rule; otherwise create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI ID, but
+		 * only if the list is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* A VLAN rule exists, but the VSI list used by this rule is
+		 * referenced by more than one VLAN rule. Create a new VSI
+		 * list appending the new VSI to the previous VSIs and update
+		 * the existing VLAN rule to point to the new VSI list ID.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a VSI count of one. We should never hit the condition
+		 * below.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* Before overriding the VSI list map info, decrement the
+		 * ref_cnt of the previous VSI list.
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
+ * @hw: pointer to the hardware structure
+ * @mv_list: list of MAC and VLAN filters
+ *
+ * If the VSI on which the MAC-VLAN pair has to be added has Rx and Tx VLAN
+ * pruning bits enabled, then it is the caller's responsibility to also add
+ * a VLAN-only filter on the same VSI; otherwise, packets belonging to that
+ * VLAN won't be received on that VSI.
+ */
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
+{
+	struct ice_fltr_list_entry *mv_list_itr;
+
+	if (!mv_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
+			    list_entry) {
+		enum ice_sw_lkup_type l_type =
+			mv_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		mv_list_itr->status =
+			ice_add_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+					      mv_list_itr);
+		if (mv_list_itr->status)
+			return mv_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @pi: pointer to the port_info structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above-mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the swid)
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	struct ice_hw *hw = pi->hw;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = pi->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_tx_vsi_rule_id;
+	}
+
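+	/* For a removal, the fltr_rule_id recorded above (saved when the
+	 * rule was added) tells the AQ which default-VSI rule to delete.
+	 */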
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = hw_vsi_id;
+			pi->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = hw_vsi_id;
+			pi->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. Caller should be aware that this call will only work if all
+ * the entries passed into m_list were added previously. It will not attempt to
+ * do a partial remove of entries that were found.
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of MAC VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+						 v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif /* !NO_MACVLAN_SUPPORT */
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of the fltr_info.fltr_act,
+ * fltr_info.vsi_handle, and fltr_info.fwd_id fields. These are set such
+ * that later logic can extract which VSI to remove the filter from.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with list.
+ */
+static enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure VSI id is valid and within boundary */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+/**
+ * ice_determine_promisc_mask
+ * @fi: filter info to parse
+ *
+ * Helper function to determine which ICE_PROMISC_ mask corresponds
+ * to the given filter info.
+ */
+static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
+{
+	u16 vid = fi->l_data.mac_vlan.vlan_id;
+	u8 *macaddr = fi->l_data.mac.mac_addr;
+	bool is_tx_fltr = false;
+	u8 promisc_mask = 0;
+
+	if (fi->flag == ICE_FLTR_TX)
+		is_tx_fltr = true;
+
+	if (IS_BROADCAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_BCAST_TX : ICE_PROMISC_BCAST_RX;
+	else if (IS_MULTICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_MCAST_TX : ICE_PROMISC_MCAST_RX;
+	else if (IS_UNICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_UCAST_TX : ICE_PROMISC_UCAST_RX;
+	if (vid)
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_VLAN_TX : ICE_PROMISC_VLAN_RX;
+
+	return promisc_mask;
+}
+
+/**
+ * ice_remove_promisc - Remove promisc based filter rules
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe ID for which the rule needs to be removed
+ * @v_list: list of promisc entries
+ */
+static enum ice_status
+ice_remove_promisc(struct ice_hw *hw, u8 recp_id,
+		   struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, recp_id, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_clear_vsi_promisc - clear specified promiscuous mode(s) for given VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to clear mode
+ * @promisc_mask: mask of promiscuous config bits to clear
+ * @vid: VLAN ID to clear VLAN promiscuous
+ */
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry, *tmp;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct ice_fltr_mgmt_list_entry *itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u8 recipe_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
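+	/* VLAN promiscuous rules are tracked under a separate recipe, so the
+	 * list to scan depends on whether a VLAN ID was supplied.
+	 */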
+	if (vid)
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	else
+		recipe_id = ICE_SW_LKUP_PROMISC;
+
+	rule_head = &sw->recp_list[recipe_id].filt_rules;
+	rule_lock = &sw->recp_list[recipe_id].filt_rule_lock;
+
+	INIT_LIST_HEAD(&remove_list_head);
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(itr, rule_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		u8 fltr_promisc_mask = 0;
+
+		if (!ice_vsi_uses_fltr(itr, vsi_handle))
+			continue;
+
+		fltr_promisc_mask |=
+			ice_determine_promisc_mask(&itr->fltr_info);
+
+		/* Skip if filter is not completely specified by given mask */
+		if (fltr_promisc_mask & ~promisc_mask)
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							&remove_list_head,
+							&itr->fltr_info);
+		if (status) {
+			ice_release_lock(rule_lock);
+			goto free_fltr_list;
+		}
+	}
+	ice_release_lock(rule_lock);
+
+	status = ice_remove_promisc(hw, recipe_id, &remove_list_head);
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+
+	return status;
+}
+
+/**
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
+ */
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, u16 vid)
+{
+	enum { UCAST_FLTR = 1, MCAST_FLTR, BCAST_FLTR };
+	struct ice_fltr_list_entry f_list_entry;
+	struct ice_fltr_info new_fltr;
+	enum ice_status status = ICE_SUCCESS;
+	bool is_tx_fltr;
+	u16 hw_vsi_id;
+	int pkt_type;
+	u8 recipe_id;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_vsi_promisc\n");
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	ice_memset(&new_fltr, 0, sizeof(new_fltr), ICE_NONDMA_MEM);
+
+	if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC_VLAN;
+		new_fltr.l_data.mac_vlan.vlan_id = vid;
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	} else {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC;
+		recipe_id = ICE_SW_LKUP_PROMISC;
+	}
+
+	/* Separate filters must be set for each direction/packet type
+	 * combination, so we will loop over the mask value, store the
+	 * individual type, and clear it out in the input mask as it
+	 * is found.
+	 */
+	while (promisc_mask) {
+		u8 *mac_addr;
+
+		pkt_type = 0;
+		is_tx_fltr = false;
+
+		if (promisc_mask & ICE_PROMISC_UCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_RX;
+			pkt_type = UCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_UCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_TX;
+			pkt_type = UCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_RX;
+			pkt_type = MCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_TX;
+			pkt_type = MCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_RX;
+			pkt_type = BCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_TX;
+			pkt_type = BCAST_FLTR;
+			is_tx_fltr = true;
+		}
+
+		/* Check for VLAN promiscuous flag */
+		if (promisc_mask & ICE_PROMISC_VLAN_RX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_RX;
+		} else if (promisc_mask & ICE_PROMISC_VLAN_TX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_TX;
+			is_tx_fltr = true;
+		}
+
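+		/* pkt_type and is_tx_fltr now describe exactly one
+		 * direction/type combination extracted from the mask.
+		 */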
+		/* Set filter DA based on packet type */
+		mac_addr = new_fltr.l_data.mac.mac_addr;
+		if (pkt_type == BCAST_FLTR) {
+			ice_memset(mac_addr, 0xff, ETH_ALEN, ICE_NONDMA_MEM);
+		} else if (pkt_type == MCAST_FLTR ||
+			   pkt_type == UCAST_FLTR) {
+			/* Use the dummy ether header DA */
+			ice_memcpy(mac_addr, dummy_eth_header, ETH_ALEN,
+				   ICE_NONDMA_TO_NONDMA);
+			if (pkt_type == MCAST_FLTR)
+				mac_addr[0] |= 0x1;	/* Set multicast bit */
+		}
+
+		/* Need to reset this to zero for all iterations */
+		new_fltr.flag = 0;
+		if (is_tx_fltr) {
+			new_fltr.flag |= ICE_FLTR_TX;
+			new_fltr.src = hw_vsi_id;
+		} else {
+			new_fltr.flag |= ICE_FLTR_RX;
+			new_fltr.src = hw->port_info->lport;
+		}
+
+		new_fltr.fltr_act = ICE_FWD_TO_VSI;
+		new_fltr.vsi_handle = vsi_handle;
+		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+		f_list_entry.fltr_info = new_fltr;
+
+		status = ice_add_rule_internal(hw, recipe_id, &f_list_entry);
+		if (status != ICE_SUCCESS)
+			goto set_promisc_exit;
+	}
+
+set_promisc_exit:
+	return status;
+}
+
+/**
+ * ice_set_vlan_vsi_promisc
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @rm_vlan_promisc: true to clear the promiscuous mode(s), false to set them
+ *
+ * Configure VSI with all associated VLANs to given promiscuous mode(s)
+ */
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct LIST_HEAD_TYPE vsi_list_head;
+	struct LIST_HEAD_TYPE *vlan_head;
+	struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
+	enum ice_status status;
+	u16 vlan_id;
+
+	INIT_LIST_HEAD(&vsi_list_head);
+	vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
+	ice_acquire_lock(vlan_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
+					  &vsi_list_head);
+	ice_release_lock(vlan_lock);
+	if (status)
+		goto free_fltr_list;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
+			    list_entry) {
+		vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+		if (rm_vlan_promisc)
+			status = ice_clear_vsi_promisc(hw, vsi_handle,
+						       promisc_mask, vlan_id);
+		else
+			status = ice_set_vsi_promisc(hw, vsi_handle,
+						     promisc_mask, vlan_id);
+		if (status)
+			break;
+	}
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&list_itr->list_entry);
+		ice_free(hw, list_itr);
+	}
+	return status;
+}
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		ice_remove_promisc(hw, lkup, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+#ifndef NO_MACVLAN_SUPPORT
+		ice_remove_mac_vlan(hw, &remove_list_head);
+#else
+		ice_debug(hw, ICE_DBG_SW, "MAC VLAN look up is not supported yet\n");
+#endif /* !NO_MACVLAN_SUPPORT */
+		break;
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Remove filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
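
[A hedged usage sketch of the function above: a teardown path would typically validate the handle and then drop every filter class in one call. The helper name is illustrative; ice_is_vsi_valid is declared in ice_switch.h below.]

static void example_teardown_vsi_filters(struct ice_hw *hw, u16 vsi_handle)
{
	/* ice_remove_vsi_fltr walks all eight lookup types listed above */
	if (ice_is_vsi_valid(hw, vsi_handle))
		ice_remove_vsi_fltr(hw, vsi_handle);
}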
+
+
+
+
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ * @recp_id: recipe ID for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filters of recipe recp_id for a VSI represented via vsi_handle.
+ * A valid VSI handle must be passed.
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the src in case it is vsi num */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the src in case it is vsi num */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Update the default recipes and the ones that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
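
[A hedged sketch of how a rebuild path after a core/global reset might use the replay entry point above; the upper bound and helper name are placeholders.]

static enum ice_status
example_replay_after_reset(struct ice_hw *hw, u16 num_vsi_handles)
{
	enum ice_status status = ICE_SUCCESS;
	u16 i;

	for (i = 0; i < num_vsi_handles; i++) {
		if (!ice_is_vsi_valid(hw, i))
			continue;
		status = ice_replay_vsi_all_fltr(hw, i);
		if (status != ICE_SUCCESS)
			break;
	}
	return status;
}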
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..66a172f
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST,
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or only set the data associated with the lookup
+		   * match; everything else should be zero.
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
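
[To make the l_data union usage concrete, a hedged sketch of filling this structure for a unicast MAC forwarding rule. The address, handle, and helper name are placeholders; ice_memset/ice_memcpy are the osdep wrappers used throughout this base code.]

static void example_fill_mac_fltr(struct ice_fltr_info *f, u16 vsi_handle)
{
	static const u8 mac[ETH_ALEN] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };

	/* l_data (and the rest of the struct) must start zeroed */
	ice_memset(f, 0, sizeof(*f), ICE_NONDMA_MEM);
	f->lkup_type = ICE_SW_LKUP_MAC;
	f->fltr_act = ICE_FWD_TO_VSI;
	f->flag = ICE_FLTR_TX;
	f->src_id = ICE_SRC_ID_VSI;
	f->vsi_handle = vsi_handle;
	ice_memcpy(f->l_data.mac.mac_addr, mac, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
}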
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* set if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another bigger recipe, the chain index
+	 * corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, the count of those
+	 * recipes; their IDs are tracked in r_bitmap below
+	 */
+	u8 n_grp_count;
+
+	/* Bitmap specifying the IDs associated with this group of recipes */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	struct ice_lock filt_rule_lock;	/* protect filter rule structure */
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	/* This allows the user to specify the recipe priority.
+	 * For now, this becomes 'fwd_priority' when the recipe
+	 * is created; recipes can usually have 'fwd' and 'join'
+	 * priority.
+	 */
+	u8 priority;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains MAC or VLAN membership
+ * to HW list mapping, since multiple VSIs can subscribe to the same MAC or
+ * VLAN. As an optimization the VSI list should be created only when a
+ * second VSI becomes a subscriber to the same MAC address. VSI lists are always
+ * used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
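
[These flags are single bits meant to be OR-ed into the promisc_mask parameters declared below; for instance, a hypothetical receive-everything mask:]

#define EXAMPLE_PROMISC_RX_ALL	(ICE_PROMISC_UCAST_RX | \
				 ICE_PROMISC_MCAST_RX | \
				 ICE_PROMISC_BCAST_RX)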
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#ifndef NO_MACVLAN_SUPPORT
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#endif /* !NO_MACVLAN_SUPPORT */
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+
+
+/* Promisc/defport setup for VSIs */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction);
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		    u16 vid);
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid);
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc);
+
+
+
+
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 09/31] net/ice/base: add code to work with the NVM
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (7 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 10/31] net/ice/base: add common functions Wenzhuo Lu
                     ` (21 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to read/write/query the NVM image.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_nvm.c | 387 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 387 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_nvm.c

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* The highest byte of the offset must be zero. */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond Shadow RAM limit.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* We can access only up to 4KB (one sector), in one AQ write */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
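
[A worked example of the third check, assuming ICE_SR_SECTOR_SIZE_IN_WORDS is 2048, i.e. one 4KB sector as the comment above implies:]

/* offset = 2040, words = 16: (2040 + 15) / 2048 == 1 but 2040 / 2048 == 0,
 * so the access would straddle a sector boundary and is rejected.
 * offset = 2032, words = 16: both divisions yield 0, so it is allowed.
 */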
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* values in "offset" and "words" parameters are sized as words
+	 * (16 bits) but ice_aq_read_nvm expects these values in bytes.
+	 * So do this conversion while calling ice_aq_read_nvm.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
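
[A minimal usage sketch of the word-to-byte conversion described above; the offset value and helper name are illustrative.]

static enum ice_status example_read_one_sr_word(struct ice_hw *hw, u16 *word)
{
	/* Word offset 0x48 becomes byte offset 0x90 and a 2-byte length
	 * by the time ice_aq_read_nvm is called.
	 */
	return ice_read_sr_aq(hw, 0x48, 1, word, true);
}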
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. The caller is expected to hold NVM ownership (see
+ * ice_acquire_nvm) around this call.
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words to read in this step.
+		 * It's not allowed to read more than one page at a time or
+		 * to cross page boundaries.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* Check if this is last command, if so set proper flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads Shadow RAM word and acquire NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_word_aq
+ * method.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM settings
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as the Shadow RAM
+ * size, NVM version, EETRACK, OEM version, and blank_nvm_mode.
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the nvm programming mode
+	 * as the blank mode may be used in the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Switching to words (sr_size contains power of 2) */
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf - Reads Shadow RAM buf and acquire lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buf read is preceded by the NVM ownership take
+ * and followed by the release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
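
[A hedged caller sketch; unlike ice_read_sr_buf_aq, this wrapper handles the NVM acquire/release itself. The buffer size, offset, and helper name are placeholders.]

static enum ice_status example_read_sr_region(struct ice_hw *hw)
{
	u16 buf[8];
	u16 words = 8;	/* in: words requested; out: words actually read */
	enum ice_status status;

	status = ice_read_sr_buf(hw, 0x100, &words, buf);
	if (!status)
		ice_debug(hw, ICE_DBG_NVM, "read %d SR words\n", words);
	return status;
}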
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 10/31] net/ice/base: add common functions
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (8 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 11/31] net/ice/base: add various headers Wenzhuo Lu
                     ` (20 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code that multiple other features use.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  186 ++
 2 files changed, 3707 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d49264d
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non pxe mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+
+
+
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return the per PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user specified buffer, which should be interpreted as a
+ * "manage_mac_read" response.
+ * Responses such as the various MAC addresses are stored in the HW struct
+ * (port.mac). ice_aq_discover_caps is expected to be called before this
+ * function.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
+	}
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
+		/* If more than one media type is selected, report unknown */
+		return ICE_MEDIA_UNKNOWN;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR:
+		case ICE_PHY_TYPE_LOW_50GBASE_FR:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_DR:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_CP:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	} else {
+		switch (hw_link_info->phy_type_high) {
+		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Disable FW logging only when the control queue is still responsive */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but all modules are not
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
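
[Following the configuration contract spelled out in the long comment above, a hedged sketch of what a caller would set before ice_init_hw; the module index and event bits are placeholders.]

static void example_request_fw_logging(struct ice_hw *hw)
{
	hw->fw_log.cq_en = true;	/* emit FW log messages via Rx CQ events */
	hw->fw_log.uart_en = false;	/* leave the UART path disabled */

	/* request events for one hypothetical module; ice_cfg_fw_log(hw, true)
	 * is then invoked during ice_init_hw
	 */
	hw->fw_log.evnts[0].cfg = 0x3;
}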
+
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine int/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+	/* Initialize max burst size */
+	if (!hw->max_burst_size)
+		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
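
[A condensed, illustrative probe/remove flow: since ice_init_hw unrolls its own work on failure, per the note on ice_deinit_hw below, the caller only pairs a successful init with a later deinit. The function name is hypothetical.]

static enum ice_status example_probe_and_remove(struct ice_hw *hw)
{
	enum ice_status status;

	status = ice_init_hw(hw);
	if (status)
		return status;	/* ice_init_hw already unrolled its own work */

	/* ... normal operation ... */

	ice_deinit_hw(hw);	/* nominal teardown path only */
	return ICE_SUCCESS;
}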
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
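+
+/* Illustrative sketch, not part of this patch: triggering a global reset
+ * and honoring the PXE note above (hw is assumed to be an initialized
+ * struct ice_hw; the AQ rebuild steps are omitted). Once the AQ interface
+ * has been rebuilt, PXE mode must be cleared again:
+ *
+ *	enum ice_status status = ice_reset(&hw, ICE_RESET_GLOBR);
+ *
+ *	if (status)
+ *		return status;
+ *	ice_clear_pxe_mode(&hw);
+ */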
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
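+
+/* Illustrative sketch, not part of this patch: programming an Rx queue
+ * context. The field values are hypothetical; real callers derive them
+ * from the ring configuration (base and dbuf are assumed to be expressed
+ * in 128-byte units, hence the 7-bit shifts):
+ *
+ *	struct ice_rlan_ctx rlan_ctx = { 0 };
+ *
+ *	rlan_ctx.base = ring_dma_addr >> 7;
+ *	rlan_ctx.qlen = nb_desc;
+ *	rlan_ctx.dbuf = buf_size >> 7;
+ *	status = ice_write_rxq_ctx(hw, &rlan_ctx, rxq_index);
+ */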
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps a debug log of the control queue command and its descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+	if (!(mask & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
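+
+/* Illustrative sketch, not part of this patch: issuing a direct
+ * (buffer-less) command through this wrapper:
+ *
+ *	struct ice_aq_desc desc;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+ *	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ */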
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
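+
+/* Illustrative sketch, not part of this patch: after a successful call,
+ * the version fields are cached in the hw struct and can be read back:
+ *
+ *	if (!ice_aq_get_fw_ver(hw, NULL))
+ *		ice_debug(hw, ICE_DBG_INIT, "FW %d.%d.%d API %d.%d\n",
+ *			  hw->fw_maj_ver, hw->fw_min_ver, hw->fw_patch,
+ *			  hw->api_maj_ver, hw->api_min_ver);
+ */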
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire-lock, perform-action, release-lock
+ * phase of operation, it is possible that the FW may detect a timeout and
+ * issue a CORER. In this case, the driver will receive a CORER interrupt and
+ * will have to determine its cause. The calling thread that is handling this
+ * flow will likely get an error propagated back to it indicating that the
+ * Download Package, Update Package or Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion specifies, in its Timeout field, the maximum time in
+	 * ms that the driver may hold the resource.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * release common resource using the admin queue commands (0x0009)
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* There are rare cases when trying to release the resource
+	 * results in an admin queue timeout; in that case, retry the
+	 * release until the AQ command timeout elapses.
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
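+
+/* Illustrative sketch, not part of this patch: the expected
+ * acquire/use/release pattern. ICE_NVM_RES_ID and ICE_NVM_TIMEOUT are
+ * hypothetical resource id and timeout values used for the example only:
+ *
+ *	status = ice_acquire_res(hw, ICE_NVM_RES_ID, ICE_RES_READ,
+ *				 ICE_NVM_TIMEOUT);
+ *	if (status)
+ *		return status;
+ *	...access the shared resource...
+ *	ice_release_res(hw, ICE_NVM_RES_ID);
+ */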
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_num_per_func - determine number of resources per PF
+ * @hw: pointer to the hw structure
+ * @max: value to be evenly split between each PF
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities and use this to calculate the number of resources
+ * per PF based on the max value passed in.
+ */
+static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return max / funcs;
+}
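+
+/* Worked example (illustrative): with valid_functions = 0x0F (4 PFs)
+ * and max = 768, ice_hweight8() returns 4, so each PF is granted
+ * 768 / 4 = 192 resources.
+ */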
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_num_per_func(hw, ICE_MAX_VSI);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: Size of the virtual buffer
+ * @cap_count: updated with the capability count the firmware requires
+ *             when the AQ returns ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+	/* Prep values for flags, sah, sal */
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ * @phy_type_high: higher part of phy_type
+ *
+ * This helper function will convert an entry in the PHY type structure
+ * [phy_type_low, phy_type_high] to its corresponding link speed.
+ * Note: in the [phy_type_low, phy_type_high] structure, exactly one bit
+ * should be set, as this function converts a single PHY type to its speed.
+ * If no bit is set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
+ * If more than one bit is set, ICE_AQ_LINK_SPEED_UNKNOWN will be returned.
+ */
+static u16
+ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
+{
+	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2:
+	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI2:
+	case ICE_PHY_TYPE_LOW_50GBASE_CP:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR:
+	case ICE_PHY_TYPE_LOW_50GBASE_FR:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI1:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
+		break;
+	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4:
+	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_AUI4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_100GBASE_DR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	switch (phy_type_high) {
+	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
+	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return speed_phy_type_low;
+	else
+		return speed_phy_type_high;
+}
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: the format of link_speeds_bitmap matches
+ * [ice_aqc_get_link_status->link_speed]. The caller may pass in a
+ * link_speeds_bitmap that includes multiple speeds.
+ *
+ * Each entry in the [phy_type_low, phy_type_high] structure represents
+ * a certain link speed. This helper function will turn on the bits in
+ * [phy_type_low, phy_type_high] that correspond to the value of the
+ * link_speeds_bitmap input parameter.
+ */
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_high;
+	u64 pt_low;
+	int index;
+
+	/* We first check with low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+
+	/* We then check with high part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
+		pt_high = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_high |= BIT_ULL(index);
+	}
+}
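+
+/* Illustrative sketch, not part of this patch: enabling every PHY type
+ * that maps to 10G or 25G before a Set PHY config call:
+ *
+ *	u64 phy_low = 0, phy_high = 0;
+ *
+ *	ice_update_phy_type(&phy_low, &phy_high,
+ *			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+ *
+ * phy_low/phy_high then hold the PHY type bits for those speeds only.
+ */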
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_high = pcaps->phy_type_high;
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info
+		 * It sometimes takes a really long time for link to
+		 * come back from the atomic reset. Thus, we wait a
+		 * little bit.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
+ *
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
+ */
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg)
+{
+	if (!caps || !cfg)
+		return;
+
+	cfg->phy_type_low = caps->phy_type_low;
+	cfg->phy_type_high = caps->phy_type_high;
+	cfg->caps = caps->caps;
+	cfg->low_power_ctrl = caps->low_power_ctrl;
+	cfg->eee_cap = caps->eee_cap;
+	cfg->eeer_value = caps->eeer_value;
+	cfg->link_fec_opt = caps->link_fec_options;
+}
+
+/**
+ * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
+ * @cfg: PHY configuration data to set FEC mode
+ * @fec: FEC mode to configure
+ *
+ * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
+ * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
+ * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
+ */
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
+{
+	switch (fec) {
+	case ICE_FEC_BASER:
+		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+				     ICE_AQC_PHY_FEC_25G_KR_REQ;
+		break;
+	case ICE_FEC_RS:
+		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
+		break;
+	case ICE_FEC_NONE:
+		/* Clear auto FEC and all FEC option bits. */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
+		break;
+	case ICE_FEC_AUTO:
+		/* AND auto FEC bit, and all caps bits. */
+		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
+		break;
+	}
+}
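+
+/* Illustrative sketch, not part of this patch: honoring the caller
+ * contract above by seeding cfg from the reported abilities (pcaps)
+ * before selecting a FEC mode:
+ *
+ *	ice_copy_phy_caps_to_cfg(pcaps, &cfg);
+ *	ice_cfg_phy_fec(&cfg, ICE_FEC_RS);
+ *	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+ */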
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Variable link_up is true if link is up, false if link is down.
+ * The variable link_up is invalid if status is non-zero. As a
+ * result of this call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) the RSS lookup table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
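+
+/* Illustrative sketch, not part of this patch: programming a 512-entry
+ * PF lookup table, where lut[] is assumed to be a 512-byte array already
+ * filled with queue indices by the caller:
+ *
+ *	status = ice_aq_set_rss_lut(hw, vsi_handle,
+ *				    ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF, lut,
+ *				    ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512);
+ */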
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue:
+ * Initialize the following as part of the Tx queue context:
+ * Completion queue ID if the queue uses Completion queue, Quanta profile,
+ * Cache profile and Packet shaper profile.
+ *
+ * After add Tx LAN queue AQ command is completed:
+ * Interrupts should be associated with specific queues,
+ * Association of Tx queue to Doorbell queue is not part of Add LAN Tx queue
+ * flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
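+
+/* Worked example (illustrative): for one queue group holding two queues,
+ * the expected buf_size is
+ *
+ *	(sizeof(struct ice_aqc_add_tx_qgrp) -
+ *	 sizeof(struct ice_aqc_add_txqs_perq)) +
+ *	2 * sizeof(struct ice_aqc_add_txqs_perq)
+ *
+ * i.e. one group header plus two per-queue entries, which is exactly
+ * what the validation above computes.
+ */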
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* flush pipe on timeout */
+	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+	if (status) {
+		if (!qg_list)
+			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
+				  vmvf_num, hw->adminq.sq_last_status);
+		else
+			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
+				  LE16_TO_CPU(qg_list[0].q_id[0]),
+				  hw->adminq.sq_last_status);
+	}
+	return status;
+}
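+
+/* Buffer sizing sketch for disabling a single queue (illustrative; this
+ * is what the loop above computes): with num_qs == 1 (odd, so no 2-byte
+ * padding) the required size is simply
+ *
+ *	buf_size = sizeof(struct ice_aqc_dis_txq_item);
+ *
+ * which matches what ice_dis_vsi_txq() passes further below.
+ */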
+
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count
+	 * is masked to 5 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count
+	 * is masked to 6 bits, so the shift would do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
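+
+/* Worked example (hypothetical field, not from this patch): suppose one
+ * ce_info[] entry describes a 3-bit field with lsb = 5, sourced from a
+ * u8 at offset 4 of the unpacked struct. ice_write_byte() then computes
+ *
+ *	mask     = (1 << 3) - 1 = 0x07;
+ *	src_byte = src_ctx[4] & 0x07;
+ *	mask   <<= 5;  src_byte <<= 5;
+ *
+ * and read-modify-writes byte 0 (lsb / 8) of dest_ctx, keeping bits 0-4
+ * of the destination byte intact.
+ */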
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that:
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bit 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 * Bit 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN Tx queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS) {
+		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
+			  LE16_TO_CPU(buf->txqs[0].txq_id),
+			  hw->adminq.sq_last_status);
+		goto ena_txq_exit;
+	}
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
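+
+/* PMD-side usage sketch (hypothetical variable names): a Tx queue start
+ * path fills the single-queue group, packs the Tx LAN queue context via
+ * ice_set_ctx()/ice_tlan_ctx_info, and enables the queue on TC 0:
+ *
+ *	struct ice_aqc_add_tx_qgrp txq_elem = { 0 };
+ *	struct ice_tlan_ctx tx_ctx = { 0 };
+ *
+ *	txq_elem.num_txqs = 1;
+ *	txq_elem.txqs[0].txq_id = CPU_TO_LE16(txq->reg_idx);
+ *	... fill tx_ctx (base address, qlen, pf_num, src_vsi, ...) ...
+ *	ice_set_ctx((u8 *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+ *		    ice_tlan_ctx_info);
+ *	status = ice_ena_vsi_txq(hw->port_info, vsi_handle, 0, 1,
+ *				 &txq_elem, sizeof(txq_elem), NULL);
+ */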
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* If the queues are already disabled but the disable queue command
+	 * still has to be sent to complete the VF reset, then call
+	 * ice_aq_dis_lan_txq without any queue information.
+	 */
+
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
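+
+/* Queue-stop usage sketch (hypothetical names): the caller passes the
+ * queue id together with the TEID saved when the queue was enabled:
+ *
+ *	u16 q_ids[1] = { txq->reg_idx };
+ *	u32 q_teids[1] = { txq->q_teid };
+ *
+ *	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+ *				 ICE_NO_RESET, 0, NULL);
+ */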
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: lan or rdma
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI lan queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max lan queues array per TC
+ *
+ * This function adds/updates the VSI lan queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
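+
+/* Usage sketch (hypothetical names): enable TC 0 only, with its max
+ * queue count taken from the port configuration:
+ *
+ *	u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ *
+ *	max_txqs[0] = nb_tx_queues;
+ *	status = ice_cfg_vsi_lan(hw->port_info, vsi_handle,
+ *				 0x1 /* TC 0 bit */, max_txqs);
+ */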
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from the replay filter list head, if any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+	ice_sched_replay_agg_vsi_preinit(hw);
+
+	return ice_sched_replay_tc_node_bw(hw);
+}
+
+/**
+ * ice_replay_vsi - replay VSI configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver VSI handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	if (!status)
+		status = ice_replay_vsi_agg(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_sched_replay_agg(hw);
+}
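+
+/* Post-reset replay order sketch (illustrative; the main VSI must come
+ * first, as noted above):
+ *
+ *	status = ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
+ *	for each remaining valid vsi_handle:
+ *		status = ice_replay_vsi(hw, vsi_handle);
+ *	ice_replay_post(hw);
+ */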
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values, in order to
+	 * report stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
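+
+/* Roll-over example (illustrative values): with prev_stat = 0xFFFFFFFFF0
+ * (a 40-bit counter near its max) and new_data = 0x10, the else branch
+ * yields cur_stat = (0x10 + 2^40) - 0xFFFFFFFFF0 = 0x20, i.e. 32 events
+ * since the previous read. ice_stat_update32() below is analogous with a
+ * 32-bit modulus.
+ */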
+
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* device stats are not reset at PFR, so they likely will not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values, in order to
+	 * report stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to hold the element information
+ *
+ * This function queries HW element information
+ */
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..082ae66
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "ice_switch.h"
+
+/* prototype for functions used for SW locks */
+void ice_free_list(struct LIST_HEAD_TYPE *list);
+void ice_init_lock(struct ice_lock *lock);
+void ice_acquire_lock(struct ice_lock *lock);
+void ice_release_lock(struct ice_lock *lock);
+void ice_destroy_lock(struct ice_lock *lock);
+
+void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
+void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
+
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg);
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
+void ice_sched_replay_agg(struct ice_hw *hw);
+enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
+enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf);
+#endif /* _ICE_COMMON_H_ */
-- 
1.9.3


* [dpdk-dev] [PATCH v5 11/31] net/ice/base: add various headers
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (9 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 10/31] net/ice/base: add common functions Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
                     ` (19 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add various headers that provide status codes and
basic defines for use in the code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_alloc.h     | 22 ++++++++++++++++++
 drivers/net/ice/base/ice_flex_type.h | 19 +++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  8 +++++++
 drivers/net/ice/base/ice_status.h    | 45 ++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_status.h

diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
-- 
1.9.3


* [dpdk-dev] [PATCH v5 12/31] net/ice/base: add protocol structures and defines
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (10 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 11/31] net/ice/base: add various headers Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
                     ` (18 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and defines that describe which
protocols the NIC can handle.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 248 +++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..7b92c71
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* One of the allowed 5 words is reserved for the switch id, so a recipe
+ * can have at most 4 words. Up to 5 such recipes can be chained together,
+ * so the maximum number of words that can be programmed for a lookup is
+ * 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types:
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further,
+ * flags from the field vector must be used.
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u16 payload_len;
+	u8 next_hdr;
+	u8 hop_limit;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_sctp_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u32 verification_tag;
+	u32 check;
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_sctp_hdr sctp_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This is a mapping table entry that maps every word within a given
+ * protocol structure to the real byte offset, as per the specification
+ * of that protocol header.
+ * E.g. the dst address is 3 words in the ethertype header, at byte
+ * offsets 0, 2 and 4 in the actual packet header, and the src address
+ * is at byte offsets 6, 8 and 10.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
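+
+/* Hypothetical table entry (illustrative of how such a table would be
+ * populated): the outer MAC header extracted word by word,
+ *
+ *	{ ICE_MAC_OFOS, { 0, 2, 4, 6, 8, 10 } }
+ *
+ * i.e. dst addr words at byte offsets 0/2/4 and src addr words at
+ * byte offsets 6/8/10.
+ */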
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
-- 
1.9.3


* [dpdk-dev] [PATCH v5 13/31] net/ice/base: add structures for RX/TX queues
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (11 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
                     ` (17 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures that define how the RX/TX queues
are used.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 ++++++++++++++++++++++++++++++++++
 1 file changed, 2291 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..d27045f
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-ip values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
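+
+/* Illustrative decode of qword1 from a legacy descriptor write-back
+ * (sketch; rxdp is assumed to point at a completed descriptor):
+ *
+ *	u64 qw1 = LE64_TO_CPU(rxdp->wb.qword1.status_error_len);
+ *	u32 stat = (qw1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+ *
+ *	if (stat & BIT(ICE_RX_DESC_STATUS_DD_S)) {
+ *		u16 ptype = (qw1 & ICE_RXD_QW1_PTYPE_M) >>
+ *			    ICE_RXD_QW1_PTYPE_S;
+ *		u16 plen = (qw1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+ *			   ICE_RXD_QW1_LEN_PBUF_S;
+ *	}
+ */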
+
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 2
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Flow Id higher 16-bits
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
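+
+/* Illustrative RX-path use of the NIC profile (sketch; the mbuf field
+ * is from the DPDK PMD side and is an assumption here):
+ *
+ *	volatile struct ice_32b_rx_flex_desc_nic *nic = ...;
+ *	u16 stat0 = LE16_TO_CPU(nic->status_error0);
+ *	u16 plen = LE16_TO_CPU(nic->pkt_len) & ICE_RX_FLX_DESC_PKT_LEN_M;
+ *
+ *	if (stat0 & BIT(ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))
+ *		mbuf->hash.rss = LE32_TO_CPU(nic->rss_hash);
+ */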
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 4
+ * Flex-field 0: Destination Vsi
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: There are a total
+ * of 64 profiles. Profile IDs 0/1 are for legacy use, and
+ * profiles 2-63 are flex profiles that can be programmed
+ * with specific metadata (profile 7 is reserved for HW).
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex descriptor Dword Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32byte_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32byte_rx_flex_desc.ptype_flexi_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32byte_rx_flex_desc.pkt_length member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
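+
+/* Illustrative sketch (not part of the base code): extracting the packet
+ * length from Qword 0 of a write-back descriptor using the mask above.
+ * LE16_TO_CPU comes from ice_osdep.h; the ice_example_ name is
+ * hypothetical.
+ */
+static inline u16
+ice_example_rx_pkt_len(const struct ice_32b_rx_flex_desc_nic_2 *desc)
+{
+	return LE16_TO_CPU(desc->pkt_len) & ICE_RX_FLX_DESC_PKT_LEN_M;
+}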
+
+/* for ice_32byte_rx_flex_desc.header_length_sph_flexi_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
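+
+/* Illustrative sketch (not part of the base code): a context-info table
+ * built with ICE_CTX_STORE() tells pack/unpack helpers where each
+ * ice_rlan_ctx field lives in the hardware context image. The widths and
+ * LSB positions below are placeholders, not the real layout.
+ */
+static const struct ice_ctx_ele ice_example_rlan_ctx_info[] = {
+	/* Field			Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,	13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,	8,	13),
+	{ 0 }
+};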
+
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
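+
+/* Illustrative sketch (not part of the base code): composing the second
+ * quadword of a Tx data descriptor from the shifts/masks above.
+ * td_offset is the pre-packed MACLEN/IPLEN/L4LEN value; CPU_TO_LE64
+ * comes from ice_osdep.h and the ice_example_ name is hypothetical.
+ */
+static inline __le64
+ice_example_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size,
+		       u64 td_tag)
+{
+	return CPU_TO_LE64(ICE_TX_DESC_DTYPE_DATA |
+			   (td_cmd << ICE_TXD_QW1_CMD_S) |
+			   (td_offset << ICE_TXD_QW1_OFFSET_S) |
+			   ((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+			   (td_tag << ICE_TXD_QW1_L2TAG1_S));
+}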
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
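+
+/* Illustrative sketch (not part of the base code): composing qw1 of a Tx
+ * context descriptor for a TSO request from the fields above. The
+ * ice_example_ name is hypothetical; mss must stay within
+ * ICE_TXD_CTX_MIN_MSS/ICE_TXD_CTX_MAX_MSS.
+ */
+static inline __le64
+ice_example_tso_ctx_qw1(u32 tso_len, u32 mss)
+{
+	return CPU_TO_LE64(ICE_TX_DESC_DTYPE_CTX |
+			   ((u64)ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
+			   ((u64)tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+			   ((u64)mss << ICE_TXD_CTX_QW1_MSS_S));
+}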
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0x7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
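+
+/* Illustrative sketch (not part of the base code): applying the lookup
+ * workflow described above the table. The ice_example_ name is
+ * hypothetical.
+ */
+static inline bool ice_example_ptype_has_outer_ip(u16 ptype)
+{
+	struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype);
+
+	if (!decoded.known)
+		return false; /* packet type unknown to the table */
+	return decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP;
+}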
+
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+#define ICE_LINK_SPEED_50000MBPS	50000
+#define ICE_LINK_SPEED_100000MBPS	100000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 14/31] net/ice/base: add OS specific implementation
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (12 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization Wenzhuo Lu
                     ` (16 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add some macro definitions and small functions which
are specific to DPDK.
Also add a README.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/base/README      |  22 ++
 drivers/net/ice/base/ice_osdep.h | 524 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 546 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_osdep.h

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..708f607
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.12.11, released by the team that develops the base
+drivers for the ice NICs. The base/ directory contains the original
+source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..dd25b75
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+	u16 len_l = len;						\
+	u8 *buf_l = buf;						\
+	int i;								\
+	for (i = 0; i < len_l; i += 8)					\
+		ice_debug(hw_l, type,					\
+			  "0x%04X  0x%016"PRIx64"\n",			\
+			  i, *((u64 *)((buf_l) + i)));			\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
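+
+/* Illustrative sketch (not part of the base code): rd32()/wr32() work on
+ * any object exposing an hw_addr member; in the real driver that object
+ * is struct ice_hw, defined in ice_type.h. The example struct and
+ * function names are hypothetical; GLGEN_STAT comes from
+ * ice_hw_autogen.h.
+ */
+struct ice_example_hw { u8 *hw_addr; };
+
+static inline u32 ice_example_read_stat(struct ice_example_hw *hw)
+{
+	return rd32(hw, GLGEN_STAT); /* 32-bit MMIO read, LE to CPU order */
+}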
+
+#define BITS_PER_BYTE       8
+typedef u32 ice_bitmap_t;
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_TO_CHUNKS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define ice_declare_bitmap(name, bits) \
+	ice_bitmap_t name[BITS_TO_CHUNKS(bits)]
+
+#define BITS_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >>			\
+		((BITS_PER_BYTE * sizeof(ice_bitmap_t)) -		\
+		(((nr) - 1) % (BITS_PER_BYTE * sizeof(ice_bitmap_t))	\
+		 + 1)))
+#define BITS_PER_CHUNK          (BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define BIT_CHUNK(nr)           ((nr) / BITS_PER_CHUNK)
+#define BIT_IN_CHUNK(nr)        BIT((nr) % BITS_PER_CHUNK)
+
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = ~0u >> (8 - (sz % 8));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
+
+static inline int ice_find_first_bit(ice_bitmap_t *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+static inline int ice_find_next_bit(ice_bitmap_t *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	for (i = bits; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+}
+
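+/* Iterate over every set bit of 'addr', e.g.
+ *	u16 bit;
+ *	for_each_set_bit(bit, bm, 100)
+ *		handle_bit(bit);
+ * where handle_bit() is any caller-supplied action (illustrative only).
+ */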
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u32 bits)
+{
+	u32 max_index = BITS_TO_CHUNKS(bits);
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc(NULL, (s), 0)
+#define ice_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
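+/* ice_memdup: allocate a zeroed c-byte buffer and copy b into it.
+ * Note there is no NULL check on the allocation before the copy.
+ */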
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline void
+ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = src[i];
+
+	/* We want to only copy bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= src[i] & mask;
+}
+
+static inline bool
+ice_cmp_bitmap(ice_bitmap_t *bmp1, ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		if (bmp1[i] != bmp2[i])
+			return false;
+
+	/* We want to only compare bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	if ((bmp1[i] & mask) != (bmp2[i] & mask))
+		return false;
+
+	return true;
+}
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
+
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
+
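+/* Count the set bits in the low 8 bits of 'num' (population count). */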
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head) is the same as in sys/queue.h */
+
+/* Note: parameters are swapped */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
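+/* Note: the _SAFE variant below is an alias of the plain walker; it is
+ * not safe to remove or free the current entry while iterating.
+ */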
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (13 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 22:29     ` Ferruh Yigit
  2018-12-17 23:15     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops Wenzhuo Lu
                     ` (15 subsequent siblings)
  30 siblings, 2 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Update the documents too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                             |   2 +
 config/common_base                      |   7 +
 doc/guides/nics/features/ice.ini        |  11 +
 doc/guides/nics/ice.rst                 |  80 ++++
 doc/guides/nics/index.rst               |   1 +
 doc/guides/rel_notes/release_19_02.rst  |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  54 +++
 drivers/net/ice/base/meson.build        |  27 ++
 drivers/net/ice/ice_ethdev.c            | 636 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 305 +++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/ice_rxtx.h              | 117 ++++++
 drivers/net/ice/meson.build             |  12 +
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 drivers/net/meson.build                 |   1 +
 mk/rte.app.mk                           |   1 +
 17 files changed, 1309 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cdb18e0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,8 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/ice.rst
+F: doc/guides/nics/features/ice.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/config/common_base b/config/common_base
index d12ae98..872f440 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,13 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..946ed04
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+====================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum Number of Queue Pairs``
+
+  The maximum number of queue pairs is decided by HW. If not configured, the
+  application uses the number from HW. The number can be checked by calling
+  the API ``rte_eth_dev_info_get``.
+  To limit the number of queues, set a smaller number with the device
+  argument ``max_queue_pair_num``.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Limitations or Known Issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+The ice code released in 19.02 is for evaluation only.
+
+
+Promiscuous mode not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+As promiscuous mode is not supported at this stage, a port can only receive
+packets whose destination MAC address is the port's own.
+
+
+TX anti-spoofing cannot be disabled
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+TX anti-spoofing is enabled by default and cannot be disabled at this stage.
+Any TX packet whose source MAC address is not the port's own will be dropped
+by HW, which means io-fwd is not supported yet. MAC-fwd is recommended for
+evaluation.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1e46705..a205f15 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
     enic
     fm10k
     i40e
+    ice
     ifc
     igb
     ixgbe
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86..ca560b1 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added ICE net PMD**
+
+  Added the new ``ice`` net driver for Intel® Ethernet Network Adapters E810.
+  See the :doc:`../nics/ice` NIC guide for more details on this new driver.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..70f23e3
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,54 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER +=
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
new file mode 100644
index 0000000..0cfc8cd
--- /dev/null
+++ b/drivers/net/ice/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+	'ice_controlq.c',
+	'ice_common.c',
+	'ice_sched.c',
+	'ice_switch.c',
+	'ice_nvm.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+		'-Wno-unused-but-set-variable',
+		'-Wno-unused-variable',
+]
+c_args = cflags
+
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ice_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..4f0c819
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,636 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 __rte_unused void *opaque)
+{
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	errno = 0;
+	num = strtoul(qp_value, &end, 10);
+
+	if (!num || (*end == '-') || errno) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	return num;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int ret;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	/* On parse failure, fall back to the HW default (return 0) */
+	ret = rte_kvargs_process(kvlist, queue_num_key,
+				 ice_check_qp_num, NULL);
+	rte_kvargs_free(kvlist);
+
+	return ret < 0 ? 0 : ret;
+}
+
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* queue heap initialize */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize element  */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
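+/* Allocate 'num' contiguous resources from the pool using a best-fit
+ * search over the free list. Returns the absolute base index
+ * (pool->base + entry base) on success, or a negative errno on failure.
+ */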
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* Find best one */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry found to satisfy the request, return */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly the requested number of queues;
+	 * remove it from the free list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested; create a new
+		 * entry for the alloc list and subtract the allocated base
+		 * and length from the free-list entry.
+		 */
+		entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Default is TC0 only for now; multi-TC support needs to be added later.
+	 * Configure TC and queue mapping parameters, for enabled TC,
+	 * allocate qpnum_per_tc queues to this traffic.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust the queue number to actual queues that can be applied */
+	vsi->nb_qps = 0x1 << bsf;
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+#define ICE_TC_QUEUE_TABLE_DFLT 0x00FAC688
+	info->ingress_table  = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->egress_table   = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->outer_up_table = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	return 0;
+}
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr
+		((struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/*  Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+	if (ice_config_max_queue_pair_num(dev->device->devargs) > 0)
+		pf->lan_nb_qp_max =
+			ice_config_max_queue_pair_num(dev->device->devargs);
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of the VSI add/update
+	 * command. Assume vsi->base_queue is 0 for now and don't consider
+	 * the SRIOV/VMDQ cases at this first stage. Only the main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* store VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* At the beginning, only TC0.
+	 * What we need here is the maximum number of TX queues;
+	 * currently vsi->nb_qps holds it. Correct this if that changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_release_vsi(pf->main_vsi);
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_INIT(ice_init_log)
+{
+	ice_logtype_init = rte_log_register("pmd.net.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..94e45c8
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,305 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12 bit number.
+ * The VFTA array is actually a 4096 bit array, 128 of 32bit elements.
+ * 2^5 = 32. The val of lower 5 bits specifies the bit in the 32bit element.
+ * The higher 7 bit val specifies VFTA array index.
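+ * For example, vlan_id 100: array index = 100 >> 5 = 3, bit = 100 & 0x1F = 4.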
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When drivers are loaded, only a default main VSI exists. When a new
+	 * VSI needs to be added, HW needs to know the layout that VSIs are
+	 * organized in. Besides that, a VSI is an element and can't switch
+	 * packets; a new component, a VEB, is needed to perform switching.
+	 * So a new VSI needs to specify its uplink VSI (parent VSI) before
+	 * being created. The uplink VSI will check whether it has a VEB to
+	 * switch packets; if not, it will try to create one. Then the uplink
+	 * VSI will move the new VSI into its sib_vsi_list to manage all the
+	 * downlink VSIs.
+	 *  sib_vsi_list: the VSI list that shares the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It's NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+	uint16_t nb_msix;   /* The max number of msix vector */
+	uint8_t enabled_tc; /* The traffic class enabled */
+	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+	uint8_t vlan_filter_on; /* The VLAN filter enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF associates with */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Next free software VSI index.
+	 * To save effort, indexes are not recycled; we assume
+	 * there are more than enough of them.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* internal packet statistics, it should be excluded from the total */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* Valid in case 'on' is set to set pvid */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)(pf)->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
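+/* Round n down to the largest power of two <= n, e.g. ice_align_floor(6) == 4. */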
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..c37dc23
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of next staged packets */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
new file mode 100644
index 0000000..9ed7b27
--- /dev/null
+++ b/drivers/net/ice/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+	'ice_ethdev.c'
+	)
+
+deps += ['hash']
+includes += include_directories('base')
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 980eec2..45da3bb 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -17,6 +17,7 @@ drivers = ['af_packet',
 	'enic',
 	'failsafe',
 	'fm10k', 'i40e',
+	'ice',
 	'ifc',
 	'ixgbe',
 	'kni',
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (14 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 23:48     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 17/31] net/ice: support getting device information Wenzhuo Lu
                     ` (14 subsequent siblings)
  30 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Normally, when the device is started or stopped, its queues should be
started and stopped as well. Support both in this patch.

The ops below are added:
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base               |   2 +
 doc/guides/nics/features/ice.ini |   1 +
 doc/guides/nics/ice.rst          |   8 +
 drivers/net/ice/Makefile         |   3 +-
 drivers/net/ice/ice_ethdev.c     | 198 ++++++++-
 drivers/net/ice/ice_lan_rxtx.c   | 927 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       |  20 +
 drivers/net/ice/meson.build      |   3 +-
 8 files changed, 1159 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c

diff --git a/config/common_base b/config/common_base
index 872f440..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,8 @@ CONFIG_RTE_LIBRTE_ICE_PMD=y
 CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
 
 # Compile burst-oriented AVF PMD driver
 #
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 085e848..a43a9cd 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 946ed04..96a594f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -38,6 +38,14 @@ Please note that enabling debugging options may affect system performance.
 
   Toggle display of generic debugging messages.
 
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
 Runtime Config Options
 ~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 70f23e3..ff93800 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -50,5 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_lan_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4f0c819..2c86b3d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -14,6 +14,12 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
+static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
+
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
@@ -22,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -560,11 +578,41 @@
 }
 
 static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
+static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
@@ -595,6 +643,154 @@
 }
 
 static int
+ice_dev_configure(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Initialize to TRUE. If any of the Rx queues doesn't meet the
+	 * bulk allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc(NULL,
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc(NULL,
+					   vsi->rss_lut_size, 0);
+	if (!vsi->rss_key || !vsi->rss_lut)
+		return -ENOMEM;
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	}
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->idx, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->idx,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	/* program Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* program Rx queues' context in hardware */
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Failed to set phy mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
new file mode 100644
index 0000000..5c2301a
--- /dev/null
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptors and sets the register
+	 * to flex descriptor mode.
+	 * DPDK uses legacy descriptors, so set the register back to the
+	 * default value and then use legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size since header split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold defined in 64 descriptors units.
+	 * When the number of free descriptors goes below the lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
+	rx_ctx.lrxqthresh = 2;
+	/* Default to 32-byte descriptors; VLAN tag extracted to L2TAG2 (1st) */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. QENA_STAT is expected to follow QENA_REQ
+	 * within no more than 10 us.
+	 * TODO: need to change the wait counter later
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check if it is timeout */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions
+	(__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or not set up",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add lan txq");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(DEBUG, "Failed to disable Lan Tx queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocate a little more memory because the vectorized/bulk_alloc
+	 * Rx functions don't check boundaries on each iteration.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	/* Reserve DMA memory for the RX hardware ring. */
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket(NULL,
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When
+	 * either value is set to zero, its default is used instead.
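+	 *
+	 * Illustrative example (not from the original patch): nb_desc =
+	 * 1024 with tx_rs_thresh = 32 and tx_free_thresh = 64 satisfies
+	 * every constraint above: 0 < 32 < 1022, 32 <= 64,
+	 * 1024 % 32 == 0 and 0 < 64 < 1021.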
+	 */
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket(NULL,
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index c37dc23..088a206 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -114,4 +114,24 @@ struct ice_tx_queue {
 		uint64_t outer_l3_len:16; /* outer L3 Header Length */
 	};
 };
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 9ed7b27..beb0d39 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -5,7 +5,8 @@ subdir('base')
 objs = [base_objs]
 
 sources = files(
-	'ice_ethdev.c'
+	'ice_ethdev.c',
+	'ice_lan_rxtx.c'
 	)
 
 deps += ['hash']
-- 
1.9.3


* [dpdk-dev] [PATCH v5 17/31] net/ice: support getting device information
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (15 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 18/31] net/ice: support packet type getting Wenzhuo Lu
                     ` (13 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.
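
A quick usage sketch (editor's illustration, not part of the patch),
showing how an application reads what this op reports:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void print_port_caps(uint16_t port_id)
	{
		struct rte_eth_dev_info info;

		/* filled in by the PMD's dev_infos_get op */
		rte_eth_dev_info_get(port_id, &info);
		printf("rx/tx queues max %u/%u, speed capa 0x%x\n",
		       info.max_rx_queues, info.max_tx_queues,
		       (unsigned int)info.speed_capa);
	}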

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 103 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h     |  13 +++++
 3 files changed, 117 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index a43a9cd..af8f0d3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 2c86b3d..c572ba6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -790,6 +793,106 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_KEEP_CRC |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_VLAN_FILTER;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS |
+		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->speed_capa = ETH_LINK_SPEED_10M |
+			       ETH_LINK_SPEED_100M |
+			       ETH_LINK_SPEED_1G |
+			       ETH_LINK_SPEED_2_5G |
+			       ETH_LINK_SPEED_5G |
+			       ETH_LINK_SPEED_10G |
+			       ETH_LINK_SPEED_20G |
+			       ETH_LINK_SPEED_25G |
+			       ETH_LINK_SPEED_40G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = ICE_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = ICE_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
+	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 94e45c8..3cefa5b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -102,6 +102,19 @@
 		       ICE_FLAG_RSS_AQ_CAPABLE | \
 		       ICE_FLAG_VF_MAC_BY_PF)
 
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
 struct ice_adapter;
 
 /**
-- 
1.9.3


* [dpdk-dev] [PATCH v5 18/31] net/ice: support packet type getting
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (16 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 17/31] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 19/31] net/ice: support link update Wenzhuo Lu
                     ` (12 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.
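
A quick usage sketch (editor's illustration, not part of the patch):
this op backs rte_eth_dev_get_supported_ptypes(), which an application
can call to discover which packet types the PMD can recognize:

	#include <stdio.h>
	#include <rte_common.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf_ptype.h>

	static void dump_l4_ptypes(uint16_t port_id)
	{
		uint32_t ptypes[32];
		int i, num;

		/* restrict the query to L4 packet types */
		num = rte_eth_dev_get_supported_ptypes(port_id,
				RTE_PTYPE_L4_MASK, ptypes,
				RTE_DIM(ptypes));
		for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
			printf("ptype 0x%08x\n", (unsigned int)ptypes[i]);
	}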

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |   2 +
 drivers/net/ice/ice_lan_rxtx.c | 601 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c572ba6..c916bf2 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -44,6 +44,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 };
 
 static void
@@ -493,6 +494,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 5c2301a..8230bb2 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,6 +884,42 @@
 	rte_free(q);
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -925,3 +961,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet explains in detail what each ptype value means.
+ *
+ * @note: keep ice_dev_supported_ptypes_get() in sync with any change here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..871646f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,6 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
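
For context on how the table above is consumed: on the RX path the
10-bit ptype from each descriptor indexes ad->ptype_tbl, and the
resolved RTE_PTYPE_* flags land in mbuf->packet_type. Below is a
minimal sketch of how an application might branch on those flags;
the classify_pkt() helper is hypothetical, while the RTE_PTYPE_*
masks and RTE_ETH_IS_* tests are standard rte_mbuf_ptype.h API.

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* Hypothetical helper: coarse classification from mbuf->packet_type,
 * which the PMD fills from ad->ptype_tbl on the RX path.
 */
static const char *
classify_pkt(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_TUNNEL_PKT(ptype)) {
		/* GRENAT in the table above covers GRE/Teredo/VXLAN */
		if ((ptype & RTE_PTYPE_INNER_L4_MASK) ==
		    RTE_PTYPE_INNER_L4_TCP)
			return "tunneled, inner TCP";
		return "tunneled, other inner L4";
	}
	if (RTE_ETH_IS_IPV4_HDR(ptype) &&
	    (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		return "IPv4/UDP";
	return "other";
}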

* [dpdk-dev] [PATCH v5 19/31] net/ice: support link update
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (17 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 18/31] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 20/31] net/ice: support MTU setting Wenzhuo Lu
                     ` (11 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.
The LSC interrupt is also enabled in this patch.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 332 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 334 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index af8f0d3..eb852ff 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -5,6 +5,8 @@
 ;
 [Features]
 Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c916bf2..3118b05 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -45,6 +47,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -331,6 +334,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc(NULL, event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by NIC for handling a
+ * specific interrupt event.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -488,6 +672,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -496,6 +681,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	pf->adapter->eth_dev = dev;
@@ -541,6 +727,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -587,6 +782,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -604,6 +801,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -629,6 +833,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	ice_dev_close(dev);
 
@@ -639,6 +845,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -757,6 +970,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info AQ command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -895,6 +1111,122 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
 }
 
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
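
A minimal sketch of the application side of the LSC support above,
assuming dev_conf.intr_conf.lsc was set to 1 at configure time. The
lsc_event_cb() name is hypothetical; the callback registration API
is standard ethdev.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Hypothetical callback: invoked from the interrupt thread when the
 * PMD raises RTE_ETH_EVENT_INTR_LSC after ice_link_update().
 */
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(type);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	/* read back the link state the PMD wrote atomically */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: link %s, %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

/* at init time:
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *				 lsc_event_cb, NULL);
 */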

* [dpdk-dev] [PATCH v5 20/31] net/ice: support MTU setting
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (18 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 19/31] net/ice: support link update Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 21/31] net/ice: support MAC ops Wenzhuo Lu
                     ` (10 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops mtu_set.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index eb852ff..fab6442 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3118b05..0c0efce 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 };
 
 static void
@@ -1228,6 +1230,38 @@ static int ice_init_rss(struct ice_pf *pf)
 }
 
 static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	/* check if mtu is within the allowed range */
+	if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
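
A minimal sketch of what reaches the new mtu_set op through the
standard ethdev API; the set_jumbo_mtu() wrapper and the MTU value
are only examples.

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

static int
set_jumbo_mtu(uint16_t port_id)
{
	/* The op computes frame = MTU + Ethernet header (14) + CRC (4)
	 * + one VLAN tag (4), so MTU 9000 means 9022 bytes on the wire,
	 * which must stay within the driver's ICE_FRAME_SIZE_MAX.
	 */
	int ret = rte_eth_dev_set_mtu(port_id, 9000);

	if (ret == -EBUSY)
		printf("port %u must be stopped before changing MTU\n",
		       port_id);
	return ret;
}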

* [dpdk-dev] [PATCH v5 21/31] net/ice: support MAC ops
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (19 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 20/31] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops Wenzhuo Lu
                     ` (9 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
mac_addr_set
mac_addr_add
mac_addr_remove

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 236 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 238 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index fab6442..759a036 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -10,6 +10,8 @@ Link status event    = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0c0efce..29840fd 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 };
 
 static void
@@ -336,6 +346,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find out specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -544,6 +678,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -629,6 +766,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximum number of TX queues.
 	 * Currently vsi->nb_qps holds it.
@@ -1261,6 +1413,90 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ret = ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(ERR, "Failed to set manage mac");
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
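
A minimal sketch of the standard ethdev call that lands in the
mac_addr_add op above; the add_extra_mac() name and the address
value are only examples.

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
add_extra_mac(uint16_t port_id)
{
	/* locally administered example address */
	struct ether_addr addr = {
		.addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} };

	/* pool 0; the PMD turns this into an ice_add_mac_filter() call */
	return rte_eth_dev_mac_addr_add(port_id, &addr, 0);
}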

* [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (20 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 21/31] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 22:45     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS Wenzhuo Lu
                     ` (8 subsequent siblings)
  30 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 doc/guides/nics/ice.rst          |  16 ++
 drivers/net/ice/ice_ethdev.c     | 590 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 609 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 759a036..5ac8e56 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update           = Y
 Jumbo frame          = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+VLAN filter          = Y
+VLAN offload         = Y
+QinQ offload         = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 96a594f..466af55 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -64,6 +64,22 @@ Driver compilation and testing
 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
 for details.
 
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+The VLAN filter only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
 
 Limitations or Known issues
 ---------------------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 29840fd..1d3cc7a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
 static void
@@ -470,6 +483,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find out specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * VLAN 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -829,6 +1133,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -881,6 +1186,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -916,6 +1226,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1498,6 +1810,284 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+	break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If insert pvid is enabled, only tagged pkts are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3
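
For context, a minimal application-side sketch of how the VLAN callbacks
in this patch are reached through the generic ethdev API; port_id and
VLAN id 100 are placeholders, not values mandated by the driver:

#include <rte_ethdev.h>

static int
enable_vlan_features(uint16_t port_id)
{
	int ret;

	/* Invokes the PMD's vlan_offload_set callback underneath. */
	ret = rte_eth_dev_set_vlan_offload(port_id,
			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
	if (ret != 0)
		return ret;

	/* Admit VLAN 100 (vlan_filter_set) ... */
	ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
	if (ret != 0)
		return ret;

	/* ... and insert it as the port VLAN id (vlan_pvid_set). */
	return rte_eth_dev_set_vlan_pvid(port_id, 100, 1);
}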


* [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (21 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 22:47     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 24/31] net/ice: support RX queue interruption Wenzhuo Lu
                     ` (7 subsequent siblings)
  30 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops; a short usage sketch follows the list.
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
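
As a rough sketch of driving reta_update from an application, assuming
the PF exposes a 512-entry LUT (reta_size must equal the hardware LUT
size, as the code below enforces); port_id and nb_rxq are placeholders:

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
						  RTE_RETA_GROUP_SIZE];
	uint16_t i;

	/* Spread all 512 LUT entries round-robin over the rx queues. */
	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < ETH_RSS_RETA_SIZE_512; i++) {
		reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i %
			RTE_RETA_GROUP_SIZE] = i % nb_rxq;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
					   ETH_RSS_RETA_SIZE_512);
}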

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 drivers/net/ice/ice_ethdev.c     | 242 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 245 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 5ac8e56..953a869 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -12,6 +12,9 @@ MTU update           = Y
 Jumbo frame          = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 1d3cc7a..28d0282 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2006,6 +2020,234 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint32_t *lut_dw = (uint32_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of the configured hash lookup table "
+			    "(%d) doesn't match the size the hardware "
+			    "supports (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of the configured hash lookup table "
+			    "(%d) doesn't match the size the hardware "
+			    "supports (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	struct ice_aqc_get_set_rss_keys *key_dw =
+		(struct ice_aqc_get_set_rss_keys *)key;
+
+	ret = ice_aq_set_rss_key(hw, vsi->idx, key_dw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key
+		(hw, vsi->idx,
+		 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	/* set hash key */
+	status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (status)
+		return status;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: default set to 0 as hf config is not supported now */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3


* [dpdk-dev] [PATCH v5 24/31] net/ice: support RX queue interruption
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (22 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 25/31] net/ice: support FW version getting Wenzhuo Lu
                     ` (6 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops; a short usage sketch follows the list.
rx_queue_intr_enable
rx_queue_intr_disable
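
A minimal sketch of the usual interrupt-driven rx loop from the
application side; port_id is a placeholder and error handling is
omitted:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static void
wait_for_rx(uint16_t port_id)
{
	struct rte_epoll_event event;

	/* Register queue 0's interrupt with this thread's epoll fd. */
	rte_eth_dev_rx_intr_ctl_q(port_id, 0, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* Arm the interrupt, block until traffic arrives, then disarm
	 * before going back to busy polling.
	 */
	rte_eth_dev_rx_intr_enable(port_id, 0);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
	rte_eth_dev_rx_intr_disable(port_id, 0);
}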

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 230 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 231 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 953a869..2844f4c 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 28d0282..568d8a4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -1258,10 +1264,39 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 }
 
 static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and also clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
 ice_dev_stop(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1278,6 +1313,9 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -1405,6 +1443,158 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do the actual binding */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all remaining queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+		rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
+			    0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1439,6 +1629,10 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	/* enable Rx interrupts and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -2247,6 +2441,42 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
-- 
1.9.3


* [dpdk-dev] [PATCH v5 25/31] net/ice: support FW version getting
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (23 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 24/31] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 26/31] net/ice: support EEPROM information getting Wenzhuo Lu
                     ` (5 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the fw_version_get op; a short usage sketch follows.
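
A minimal application-side sketch; port_id and the 64-byte buffer are
placeholders:

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint16_t port_id)
{
	char fw[64];

	/* Returns 0 on success, or the needed size if fw is too small. */
	if (rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw)) == 0)
		printf("port %u firmware: %s\n", port_id, fw);
}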

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 2844f4c..4867433 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,7 @@ RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+FW version           = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 568d8a4..13d233a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 };
 
@@ -2478,6 +2481,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3


* [dpdk-dev] [PATCH v5 26/31] net/ice: support EEPROM information getting
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (24 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 25/31] net/ice: support FW version getting Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 27/31] net/ice: support statistics Wenzhuo Lu
                     ` (4 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add the following ops; a short usage sketch follows the list.
get_eeprom_length
get_eeprom
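
A minimal sketch of reading NVM content through the generic API;
port_id is a placeholder, and offset/length are kept word-aligned since
the driver reads 16-bit shadow-RAM words:

#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_dev_info.h>

static int
dump_eeprom_head(uint16_t port_id)
{
	struct rte_dev_eeprom_info info;
	int len = rte_eth_dev_get_eeprom_length(port_id);
	int ret;

	if (len <= 0)
		return -1;

	memset(&info, 0, sizeof(info));
	info.offset = 0;
	info.length = RTE_MIN(len, 128);	/* first 64 words */
	info.data = malloc(info.length);
	if (info.data == NULL)
		return -1;

	/* The driver fills info.magic and copies the SR words out. */
	ret = rte_eth_dev_get_eeprom(port_id, &info);
	free(info.data);
	return ret;
}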

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 45 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 4867433..c939b52 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -20,6 +20,7 @@ VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
 FW version           = Y
+Module EEPROM dump   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13d233a..42460a4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -96,6 +99,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 };
 
 static void
@@ -2581,6 +2586,46 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3


* [dpdk-dev] [PATCH v5 27/31] net/ice: support statistics
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (25 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 26/31] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 28/31] net/ice: support queue information getting Wenzhuo Lu
                     ` (3 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add the following ops; a short usage sketch follows the list.
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset
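
A minimal sketch of pulling both counter sets from an application;
port_id is a placeholder and allocations are unchecked for brevity:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
show_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_xstat *xstats;
	struct rte_eth_xstat_name *names;
	int n, i;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("ipackets %"PRIu64", opackets %"PRIu64"\n",
		       stats.ipackets, stats.opackets);

	/* Passing NULL first just queries the number of xstats. */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;
	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, xstats, n) == n)
		for (i = 0; i < n; i++)
			printf("%s: %"PRIu64"\n",
			       names[i].name, xstats[i].value);
	free(xstats);
	free(names);
}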

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 566 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 568 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index c939b52..67fd044 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -19,6 +19,8 @@ RSS reta update      = Y
 VLAN filter          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 Module EEPROM dump   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 42460a4..0b11a42 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -101,8 +109,92 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and their offsets in the stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2625,6 +2717,480 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* read the registers to refresh the values, then fill the struct */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats, reading current register values into offset */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3


* [dpdk-dev] [PATCH v5 28/31] net/ice: support queue information getting
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (26 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 27/31] net/ice: support statistics Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX Wenzhuo Lu
                     ` (2 subsequent siblings)
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops; a short usage sketch follows the list.
rxq_info_get
txq_info_get
rx_queue_count
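
A minimal sketch from the application side; port_id is a placeholder
and queue 0 is assumed to be configured:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rxq0(uint16_t port_id)
{
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, 0, &qinfo) == 0)
		printf("rxq0: %u descriptors, drop_en %u\n",
		       qinfo.nb_desc, qinfo.conf.rx_drop_en);

	/* Count of descriptors the NIC has marked done (DD bit set),
	 * scanned in steps of 4 by the driver.
	 */
	printf("rxq0 used descriptors: %d\n",
	       rte_eth_rx_queue_count(port_id, 0));
}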

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c   |  3 ++
 drivers/net/ice/ice_lan_rxtx.c | 66 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h     |  5 ++++
 3 files changed, 74 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0b11a42..3235d01 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -107,8 +107,11 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
+	.rx_queue_count               = ice_rx_queue_count,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index 8230bb2..fed12b4 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -921,6 +921,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of only every 4th rx descriptor,
+		 * to avoid scanning too frequently and degrading
+		 * performance too much.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 871646f..bad2b89 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,11 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
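
These three callbacks are not called directly; applications reach them
through the generic ethdev API. As a minimal sketch of how they might be
exercised (port/queue 0 and the printf reporting are illustrative, not
part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_queue_state(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info rx_info;
	struct rte_eth_txq_info tx_info;
	int used;

	/* served by the PMD's rxq_info_get/txq_info_get callbacks */
	if (rte_eth_rx_queue_info_get(port_id, queue_id, &rx_info) == 0)
		printf("rxq %u: %u descs, free_thresh %u\n", queue_id,
		       rx_info.nb_desc, rx_info.conf.rx_free_thresh);
	if (rte_eth_tx_queue_info_get(port_id, queue_id, &tx_info) == 0)
		printf("txq %u: %u descs, rs_thresh %u\n", queue_id,
		       tx_info.nb_desc, tx_info.conf.tx_rs_thresh);

	/* served by ice_rx_queue_count(); because the DD bit is only
	 * sampled every ICE_RXQ_SCAN_INTERVAL (4) descriptors, the
	 * result is an approximation rounded to a multiple of 4 */
	used = rte_eth_rx_queue_count(port_id, queue_id);
	if (used >= 0)
		printf("rxq %u: ~%d descs hold received packets\n",
		       queue_id, used);
}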

* [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (27 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 28/31] net/ice: support queue information getting Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 22:58     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX Wenzhuo Lu
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 31/31] net/ice: support descriptor ops Wenzhuo Lu
  30 siblings, 1 reply; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   5 +
 drivers/net/ice/ice_ethdev.c     |   5 +
 drivers/net/ice/ice_lan_rxtx.c   | 568 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h       |   8 +
 4 files changed, 584 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 67fd044..19655f1 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,14 +11,19 @@ Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 VLAN filter          = Y
+CRC offload          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
 Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3235d01..ab8fe3b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1260,6 +1260,9 @@ struct ice_xstats_name_off {
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
@@ -1732,6 +1735,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
 	/* enable Rx interrupt and map Rx queue to interrupt vector */
 	if (ice_rxq_intr_setup(dev))
 		return -EIO;
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index fed12b4..c0ee7c5 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -884,8 +884,81 @@
 	rte_free(q);
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -917,7 +990,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1028,6 +1103,495 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of the
+	 * queue. Update that register with the value of the last processed
+	 * RX descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%"PRIx64"\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ_PKT;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * in case of a non-tunneled packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus the number of context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ_PKT) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside this range is considered malicious
+			 */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+		dev->tx_pkt_burst = ice_xmit_pkts;
+		dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index bad2b89..e0218b3 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
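
The burst functions registered here sit behind rte_eth_rx_burst() and
rte_eth_tx_burst(), and ice_prep_pkts() behind rte_eth_tx_prepare(),
which enforces the TSO MSS bounds defined above and fixes up checksums
for offloaded packets. A sketch of a forwarding loop that exercises all
three (the ports, queue 0 and the burst size are illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

static void
forward_one_burst(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_mbuf *pkts[BURST];
	uint16_t nb_rx, nb_ok, nb_tx, i;

	/* resolves to ice_recv_pkts() on an ice port */
	nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST);
	if (nb_rx == 0)
		return;

	/* resolves to ice_prep_pkts(); stops at the first packet it
	 * rejects (e.g. TSO MSS out of [64, 9728]) and sets rte_errno */
	nb_ok = rte_eth_tx_prepare(tx_port, 0, pkts, nb_rx);

	/* resolves to ice_xmit_pkts() */
	nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_ok);

	/* free whatever was rejected by prepare or not accepted by Tx */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}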

* [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (28 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  2018-12-17 23:02     ` Ferruh Yigit
  2018-12-17 23:46     ` Ferruh Yigit
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 31/31] net/ice: support descriptor ops Wenzhuo Lu
  30 siblings, 2 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add RX functions, scatter and bulk.
Add TX function, simple.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_lan_rxtx.c   | 660 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 659 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 19655f1..300eced 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,6 +11,7 @@ Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+Scattered Rx         = Y
 TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index c0ee7c5..b328a96 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -957,6 +957,431 @@
 	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail regsiter */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy descriptor in ring to temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When the next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and the next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated with the last buffer. If
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -990,7 +1415,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1313,6 +1742,20 @@
 	return 0;
 }
 
+/* Construct the tx flags */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1531,10 +1974,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine if the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using a Scattered function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1588,8 +2234,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
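
Which Rx path ice_set_rx_function() ends up with follows from device and
queue configuration rather than an explicit knob: the scattered function
is chosen once dev->data->scattered_rx is set (e.g. when the scatter
offload is requested or the maximum frame exceeds a single mbuf's data
room), while the bulk-allocation path additionally requires the
RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC build option plus per-queue
preconditions checked at queue setup. A sketch that would steer the PMD
to ice_recv_scattered_pkts (queue counts are illustrative):

#include <rte_ethdev.h>

static int
configure_scattered_rx(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* request multi-segment Rx; the PMD then sets
			 * dev->data->scattered_rx and picks the
			 * scattered burst function */
			.offloads = DEV_RX_OFFLOAD_SCATTER,
		},
	};

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}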

* [dpdk-dev] [PATCH v5 31/31] net/ice: support descriptor ops
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (29 preceding siblings ...)
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-17  7:37   ` Wenzhuo Lu
  30 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-17  7:37 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     |  2 ++
 drivers/net/ice/ice_lan_rxtx.c   | 58 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       |  2 ++
 4 files changed, 64 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 300eced..196b8d5 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -25,6 +25,8 @@ QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ab8fe3b..86db69d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,8 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_lan_rxtx.c b/drivers/net/ice/ice_lan_rxtx.c
index b328a96..c481aed 100644
--- a/drivers/net/ice/ice_lan_rxtx.c
+++ b/drivers/net/ice/ice_lan_rxtx.c
@@ -1490,6 +1490,64 @@
 	return desc;
 }
 
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e0218b3..a0aa8f9 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -143,6 +143,8 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
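
Both ops are reached through rte_eth_rx_descriptor_status() and
rte_eth_tx_descriptor_status(); a typical use is gauging ring occupancy
without touching hardware registers. A minimal sketch (the offsets are
illustrative):

#include <rte_ethdev.h>

/* nonzero when at least `lookahead` packets already sit in the Rx
 * ring, i.e. the descriptor that far past the tail has DD set */
static int
rx_backlog_at_least(uint16_t port_id, uint16_t queue_id,
		    uint16_t lookahead)
{
	return rte_eth_rx_descriptor_status(port_id, queue_id,
					    lookahead) ==
	       RTE_ETH_RX_DESC_DONE;
}

/* nonzero when the Tx ring still has room `offset` descriptors
 * past the current tail */
static int
tx_has_room(uint16_t port_id, uint16_t queue_id, uint16_t offset)
{
	return rte_eth_tx_descriptor_status(port_id, queue_id,
					    offset) !=
	       RTE_ETH_TX_DESC_FULL;
}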

* Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-17 22:29     ` Ferruh Yigit
  2018-12-18  1:12       ` Lu, Wenzhuo
  2018-12-17 23:15     ` Ferruh Yigit
  1 sibling, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 22:29 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Update the documents too.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  MAINTAINERS                             |   2 +
>  config/common_base                      |   7 +
>  doc/guides/nics/features/ice.ini        |  11 +
>  doc/guides/nics/ice.rst                 |  80 ++++
>  doc/guides/nics/index.rst               |   1 +
>  doc/guides/rel_notes/release_19_02.rst  |   5 +
>  drivers/net/Makefile                    |   1 +
>  drivers/net/ice/Makefile                |  54 +++
>  drivers/net/ice/base/meson.build        |  27 ++
>  drivers/net/ice/ice_ethdev.c            | 636 ++++++++++++++++++++++++++++++++
>  drivers/net/ice/ice_ethdev.h            | 305 +++++++++++++++
>  drivers/net/ice/ice_logs.h              |  45 +++
>  drivers/net/ice/ice_rxtx.h              | 117 ++++++

I guess intention is to put this file (ice_rxtx.h) into next patch (16/31), file
seems not used in this patch.

<...>

> +RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic | vfio-pci");

Also needs to set "RTE_PMD_REGISTER_PARAM_STRING()" macro here.

<...>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-17 22:45     ` Ferruh Yigit
  0 siblings, 0 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 22:45 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Add below ops,
> ice_vlan_filter_set
> ice_vlan_offload_set
> ice_vlan_tpid_set
> ice_vlan_pvid_set
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  doc/guides/nics/features/ice.ini |   3 +
>  doc/guides/nics/ice.rst          |  16 ++
>  drivers/net/ice/ice_ethdev.c     | 590 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 609 insertions(+)
> 
> diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
> index 759a036..5ac8e56 100644
> --- a/doc/guides/nics/features/ice.ini
> +++ b/doc/guides/nics/features/ice.ini
> @@ -12,6 +12,9 @@ MTU update           = Y
>  Jumbo frame          = Y
>  Unicast MAC filter   = Y
>  Multicast MAC filter = Y
> +VLAN filter          = Y
> +VLAN offload         = Y
> +QinQ offload         = Y

To claim VLAN & QINQ support, data path also needs to be updated, to set proper
flags and vlan_tci & vlan_tci_outer on Rx path and to check and use them in Tx path.

This patch does not updates data path.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS Wenzhuo Lu
@ 2018-12-17 22:47     ` Ferruh Yigit
  0 siblings, 0 replies; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 22:47 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Add below ops,
> reta_update
> reta_query
> rss_hash_update
> rss_hash_conf_get
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  doc/guides/nics/features/ice.ini |   3 +
>  drivers/net/ice/ice_ethdev.c     | 242 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 245 insertions(+)
> 
> diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
> index 5ac8e56..953a869 100644
> --- a/doc/guides/nics/features/ice.ini
> +++ b/doc/guides/nics/features/ice.ini
> @@ -12,6 +12,9 @@ MTU update           = Y
>  Jumbo frame          = Y
>  Unicast MAC filter   = Y
>  Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y

Similar comment here, to claim RSS support data path also needs to be updated,
to set proper mbuf fields.

Perhaps it can be simpler to move RSS and VLAN patches after basic Rx/Tx support
patch and update data path for those features added, what do you think?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-17 22:58     ` Ferruh Yigit
  2018-12-18  2:49       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 22:58 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> +
> +		/* Descriptor based VLAN insertion */
> +		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {

PKT_TX_VLAN_PKT & PKT_TX_QINQ_PKT are deprecated, please prefer PKT_TX_VLAN &
PKT_TX_QINQ

^ permalink raw reply	[flat|nested] 309+ messages in thread
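
The replacement flags are drop-in: an application requesting
descriptor-based VLAN insertion sets them on the mbuf exactly as with
the old names (a sketch; the helper and tag value are illustrative):

#include <rte_mbuf.h>

static void
request_vlan_insert(struct rte_mbuf *m, uint16_t vlan_tci)
{
	m->ol_flags |= PKT_TX_VLAN;	/* preferred over PKT_TX_VLAN_PKT */
	m->vlan_tci = vlan_tci;		/* inserted via the Tx descriptor */
}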

* Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-17 23:02     ` Ferruh Yigit
  2018-12-18  3:11       ` Lu, Wenzhuo
  2018-12-17 23:46     ` Ferruh Yigit
  1 sibling, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 23:02 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Add RX functions, scatter and bulk.
> Add TX function, simple.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> +ice_tx_free_bufs(struct ice_tx_queue *txq)
> +{
> +	struct ice_tx_entry *txep;
> +	uint16_t i;
> +
> +	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> +	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
> +	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
> +		return 0;
> +
> +	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
> +
> +	for (i = 0; i < txq->tx_rs_thresh; i++)
> +		rte_prefetch0((txep + i)->mbuf);
> +
> +	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {

You can announce "Fast mbuf free" feature in .ini file if this is supported.

^ permalink raw reply	[flat|nested] 309+ messages in thread
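
For reference, that offload is requested per Tx queue at setup time, and
it is only safe when all mbufs sent on the queue come from a single
mempool and have a reference count of 1. A sketch (descriptor count and
queue id are illustrative):

#include <rte_ethdev.h>

static int
setup_fast_free_txq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_txconf txconf = {
		/* lets the PMD bulk-return mbufs with rte_mempool_put() */
		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
	};

	return rte_eth_tx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}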

* Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization Wenzhuo Lu
  2018-12-17 22:29     ` Ferruh Yigit
@ 2018-12-17 23:15     ` Ferruh Yigit
  2018-12-18  1:42       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 23:15 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

> +static struct rte_pci_driver rte_ice_pmd = {
> +	.id_table = pci_id_ice_map,
> +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,

Is "RTE_PCI_DRV_IOVA_AS_VA" not added intentionally?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX Wenzhuo Lu
  2018-12-17 23:02     ` Ferruh Yigit
@ 2018-12-17 23:46     ` Ferruh Yigit
  2018-12-18  3:13       ` Lu, Wenzhuo
  1 sibling, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 23:46 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Add RX functions, scatter and bulk.
> Add TX function, simple.
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<....>

> +	rxdp = &rxq->rx_ring[alloc_idx];
> +	for (i = 0; i < rxq->rx_free_thresh; i++) {
> +		if (likely(i < (rxq->rx_free_thresh - 1)))
> +			/* Prefetch next mbuf */
> +			rte_prefetch0(rxep[i + 1].mbuf);
> +
> +		mb = rxep[i].mbuf;
> +		rte_mbuf_refcnt_set(mb, 1);
> +		mb->next = NULL;
> +		mb->data_off = RTE_PKTMBUF_HEADROOM;
> +		mb->nb_segs = 1;
> +		mb->port = rxq->port_id;
> +		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
> +		rxdp[i].read.hdr_addr = 0;
> +		rxdp[i].read.pkt_addr = dma_addr;
> +	}
> +
> +	/* Update rx tail regsiter */

Can you please double-check the checkpatch warnings? Some of them can be
ignored, but some are easy-to-fix issues:
 WARNING:TYPO_SPELLING: 'regsiter' may be misspelled - perhaps 'register'?
 #201: FILE: drivers/net/ice/ice_lan_rxtx.c:1112:
 +	/* Update rx tail regsiter */

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops
  2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-17 23:48     ` Ferruh Yigit
  2018-12-18  1:33       ` Lu, Wenzhuo
  0 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-17 23:48 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Qiming Yang, Xiaoyun Li, Jingjing Wu

On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> Normally when starting/stopping the device, the queues
> should be started and stopped too. Support both in
> this patch.
> 
> Below ops are added,
> dev_configure
> dev_start
> dev_stop
> dev_close
> dev_reset
> rx_queue_start
> rx_queue_stop
> tx_queue_start
> tx_queue_stop
> rx_queue_setup
> rx_queue_release
> tx_queue_setup
> tx_queue_release
> 
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  config/common_base               |   2 +
>  doc/guides/nics/features/ice.ini |   1 +
>  doc/guides/nics/ice.rst          |   8 +
>  drivers/net/ice/Makefile         |   3 +-
>  drivers/net/ice/ice_ethdev.c     | 198 ++++++++-
>  drivers/net/ice/ice_lan_rxtx.c   | 927 +++++++++++++++++++++++++++++++++++++++

Out of curiosity, why not ice_rxtx.c but "ice_lan_rxtx.c"? Does that "lan" mean
something?

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization
  2018-12-17 22:29     ` Ferruh Yigit
@ 2018-12-18  1:12       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  1:12 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 6:29 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device
> initialization
> 
> On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> > Update the documents too.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  MAINTAINERS                             |   2 +
> >  config/common_base                      |   7 +
> >  doc/guides/nics/features/ice.ini        |  11 +
> >  doc/guides/nics/ice.rst                 |  80 ++++
> >  doc/guides/nics/index.rst               |   1 +
> >  doc/guides/rel_notes/release_19_02.rst  |   5 +
> >  drivers/net/Makefile                    |   1 +
> >  drivers/net/ice/Makefile                |  54 +++
> >  drivers/net/ice/base/meson.build        |  27 ++
> >  drivers/net/ice/ice_ethdev.c            | 636
> ++++++++++++++++++++++++++++++++
> >  drivers/net/ice/ice_ethdev.h            | 305 +++++++++++++++
> >  drivers/net/ice/ice_logs.h              |  45 +++
> >  drivers/net/ice/ice_rxtx.h              | 117 ++++++
> 
> I guess intention is to put this file (ice_rxtx.h) into next patch (16/31), file
> seems not used in this patch.
Yes, better move it to a later patch. Will change it.

> 
> <...>
> 
> > +RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
> > +RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
> > +RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic |
> > +vfio-pci");
> 
> Also needs to set "RTE_PMD_REGISTER_PARAM_STRING()" macro here.
Thanks for the reminder. Will add it.
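
For reference, the macro advertises a PMD's devargs so tools such as
dpdk-pmdinfo can report them; a minimal sketch next to the existing
registration macros (the devargs key shown here is illustrative, not
necessarily what the driver ended up exposing):

	RTE_PMD_REGISTER_PARAM_STRING(net_ice, "max_queue_pair_num=<int>");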

> 
> <...>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops
  2018-12-17 23:48     ` Ferruh Yigit
@ 2018-12-18  1:33       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  1:33 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 7:48 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue
> ops
> 
> On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> > Normally when starting/stopping the device, the queues should be started
> > and stopped too. Support both in this patch.
> >
> > Below ops are added,
> > dev_configure
> > dev_start
> > dev_stop
> > dev_close
> > dev_reset
> > rx_queue_start
> > rx_queue_stop
> > tx_queue_start
> > tx_queue_stop
> > rx_queue_setup
> > rx_queue_release
> > tx_queue_setup
> > tx_queue_release
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  config/common_base               |   2 +
> >  doc/guides/nics/features/ice.ini |   1 +
> >  doc/guides/nics/ice.rst          |   8 +
> >  drivers/net/ice/Makefile         |   3 +-
> >  drivers/net/ice/ice_ethdev.c     | 198 ++++++++-
> >  drivers/net/ice/ice_lan_rxtx.c   | 927
> +++++++++++++++++++++++++++++++++++++++
> 
> Out of curiosity, why not ice_rxtx.c but "ice_lan_rxtx.c"? Does that "lan" mean
> something?
It's a good question. I don't think this 'lan' has any specific meaning, and I don't like it either. Will change it to ice_rxtx.c to make it the same as the others.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization
  2018-12-17 23:15     ` Ferruh Yigit
@ 2018-12-18  1:42       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  1:42 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 7:16 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 15/31] net/ice: support device
> initialization
> 
> > +static struct rte_pci_driver rte_ice_pmd = {
> > +	.id_table = pci_id_ice_map,
> > +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> RTE_PCI_DRV_INTR_LSC,
> 
> Is "RTE_PCI_DRV_IOVA_AS_VA" not added intentionally?
Thanks for the reminder. Will add it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX
  2018-12-17 22:58     ` Ferruh Yigit
@ 2018-12-18  2:49       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  2:49 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 6:59 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX
> 
> On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > +
> > +		/* Descriptor based VLAN insertion */
> > +		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
> 
> PKT_TX_VLAN_PKT & PKT_TX_QINQ_PKT are deprecated; please prefer
> PKT_TX_VLAN & PKT_TX_QINQ instead.
Thanks for the reminder. Will update it.

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
  2018-12-17 23:02     ` Ferruh Yigit
@ 2018-12-18  3:11       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  3:11 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,


> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 7:03 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
> 
> On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> > Add RX functions, scatter and bulk.
> > Add TX function, simple.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > +ice_tx_free_bufs(struct ice_tx_queue *txq) {
> > +	struct ice_tx_entry *txep;
> > +	uint16_t i;
> > +
> > +	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> > +	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
> > +	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
> > +		return 0;
> > +
> > +	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
> > +
> > +	for (i = 0; i < txq->tx_rs_thresh; i++)
> > +		rte_prefetch0((txep + i)->mbuf);
> > +
> > +	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
> 
> You can announce the "Fast mbuf free" feature in the .ini file if this is supported.
Thanks for the reminder. Will add it.


^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
  2018-12-17 23:46     ` Ferruh Yigit
@ 2018-12-18  3:13       ` Lu, Wenzhuo
  0 siblings, 0 replies; 309+ messages in thread
From: Lu, Wenzhuo @ 2018-12-18  3:13 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Yang, Qiming, Li, Xiaoyun, Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 18, 2018 7:47 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Li, Xiaoyun
> <xiaoyun.li@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX
> 
> On 12/17/2018 7:37 AM, Wenzhuo Lu wrote:
> > Add RX functions, scatter and bulk.
> > Add TX function, simple.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > Signed-off-by: Qiming Yang <qiming.yang@intel.com>
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <....>
> 
> > +	rxdp = &rxq->rx_ring[alloc_idx];
> > +	for (i = 0; i < rxq->rx_free_thresh; i++) {
> > +		if (likely(i < (rxq->rx_free_thresh - 1)))
> > +			/* Prefetch next mbuf */
> > +			rte_prefetch0(rxep[i + 1].mbuf);
> > +
> > +		mb = rxep[i].mbuf;
> > +		rte_mbuf_refcnt_set(mb, 1);
> > +		mb->next = NULL;
> > +		mb->data_off = RTE_PKTMBUF_HEADROOM;
> > +		mb->nb_segs = 1;
> > +		mb->port = rxq->port_id;
> > +		dma_addr =
> rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
> > +		rxdp[i].read.hdr_addr = 0;
> > +		rxdp[i].read.pkt_addr = dma_addr;
> > +	}
> > +
> > +	/* Update rx tail regsiter */
> 
> Can you please double-check the checkpatch warnings? Some of them can be
> ignored, but some are easy-to-fix issues:
>  WARNING:TYPO_SPELLING: 'regsiter' may be misspelled - perhaps 'register'?
My bad. I remember handling this, but it looks like that work was lost by mistake. Will check it again.

>  #201: FILE: drivers/net/ice/ice_lan_rxtx.c:1112:
>  +	/* Update rx tail regsiter */

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE
  2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
                   ` (23 preceding siblings ...)
  2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
@ 2018-12-18  8:46 ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
                     ` (31 more replies)
  24 siblings, 32 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

This patch set adds support for a new net PMD
for the Intel® Ethernet Network Adapter E810,
also called ice.

The features below are enabled by this patch set:

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC Stripping/insertion.
2, L2/L3 checksum strip/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, stats & xstats.

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt For Link Status.
2, firmware info query.
3, Jumbo Frame Support.
4, ptype check.
5, EEPROM check and set.

---
v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed the support of the secondary process.
 - Split the base code into more patches.
 - Passed NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.

v4:
 - Moved meson build forward.
 - Updated and split the document across the related patches.
 - Updated the device info.
 - Removed unnecessary compile config.
 - Removed the code of ops rx_descriptor_done.
 - Adjusted the order of the functions.
 - Added error print for MAC setting.

v5:
 - Removed ice_dcb.c/h.
 - Fixed compile errors with icc and i686.
 - Announced the dependency on uio and vfio.

v6:
 - Adjusted the order of the patches.
 - Fixed some checkpatch errors.
 - Some minor changes.

Paul M Stillwell Jr (13):
  net/ice/base: add registers for Intel(R) E800 Series NIC
  net/ice/base: add basic structures
  net/ice/base: add admin queue structures and commands
  net/ice/base: add sideband queue info
  net/ice/base: add device IDs for Intel(r) E800 Series NICs
  net/ice/base: add control queue information
  net/ice/base: add basic transmit scheduler
  net/ice/base: add virtual switch code
  net/ice/base: add code to work with the NVM
  net/ice/base: add common functions
  net/ice/base: add various headers
  net/ice/base: add protocol structures and defines
  net/ice/base: add structures for RX/TX queues

Wenzhuo Lu (18):
  net/ice/base: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support link update
  net/ice: support queue information getting
  net/ice: support packet type getting
  net/ice: support basic RX/TX
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support advance RX/TX
  net/ice: support statistics
  net/ice: support descriptor ops

 MAINTAINERS                              |    8 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   39 +
 doc/guides/nics/ice.rst                  |  104 +
 doc/guides/nics/index.rst                |    1 +
 doc/guides/rel_notes/release_19_02.rst   |    5 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   55 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1891 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_common.c        | 3521 +++++++++++
 drivers/net/ice/base/ice_common.h        |  186 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2291 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 5380 ++++++++++++++++
 drivers/net/ice/base/ice_sched.h         |  210 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2812 +++++++++
 drivers/net/ice/base/ice_switch.h        |  333 +
 drivers/net/ice/base/ice_type.h          |  869 +++
 drivers/net/ice/base/meson.build         |   27 +
 drivers/net/ice/ice_ethdev.c             | 3245 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.c               | 2872 +++++++++
 drivers/net/ice/ice_rxtx.h               |  154 +
 drivers/net/ice/meson.build              |   13 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 40 files changed, 36790 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.c
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 02/31] net/ice/base: add basic structures Wenzhuo Lu
                     ` (30 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the registers that comprise the Intel(R) E800
Series NIC. There is no functionality in this patch.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 MAINTAINERS                           |    6 +
 drivers/net/ice/base/ice_hw_autogen.h | 9815 +++++++++++++++++++++++++++++++++
 2 files changed, 9821 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba312..37f3bf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -593,6 +593,12 @@ F: drivers/net/ifc/
 F: doc/guides/nics/ifc.rst
 F: doc/guides/nics/features/ifc*.ini
 
+Intel ice
+M: Qiming Yang <qiming.yang@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/ice/
+
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
 M: Dmitri Epshtein <dima@marvell.com>
diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h
new file mode 100644
index 0000000..8c79891
--- /dev/null
+++ b/drivers/net/ice/base/ice_hw_autogen.h
@@ -0,0 +1,9815 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+/* Machine-generated file; do not edit */
+#ifndef _ICE_HW_AUTOGEN_H_
+#define _ICE_HW_AUTOGEN_H_
+
+
+
+#define GL_RDPU_CNTRL				0x00052054 /* Reset Source: CORER */
+#define GL_RDPU_CNTRL_RX_PAD_EN_S		0
+#define GL_RDPU_CNTRL_RX_PAD_EN_M		BIT(0)
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_S		1
+#define GL_RDPU_CNTRL_UDP_ZERO_EN_M		BIT(1)
+#define GL_RDPU_CNTRL_BLNC_EN_S			2
+#define GL_RDPU_CNTRL_BLNC_EN_M			BIT(2)
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_S		3
+#define GL_RDPU_CNTRL_RECIPE_BYPASS_M		BIT(3)
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_S	4
+#define GL_RDPU_CNTRL_RLAN_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 4)
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_S	10
+#define GL_RDPU_CNTRL_PE_ACK_REQ_PM_TH_M	MAKEMASK(0x3F, 10)
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_S		16
+#define GL_RDPU_CNTRL_REQ_WB_PM_TH_M		MAKEMASK(0x1F, 16)
+#define GL_RDPU_CNTRL_ECO_S			21
+#define GL_RDPU_CNTRL_ECO_M			MAKEMASK(0x7FF, 21)
+#define MSIX_PBA(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: FLR */
+#define MSIX_PBA_MAX_INDEX			2
+#define MSIX_PBA_PENBIT_S			0
+#define MSIX_PBA_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TADD_MAX_INDEX			64
+#define MSIX_TADD_MSIXTADD10_S			0
+#define MSIX_TADD_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD_MSIXTADD_S			2
+#define MSIX_TADD_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TUADD(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TUADD_MAX_INDEX			64
+#define MSIX_TUADD_MSIXTUADD_S			0
+#define MSIX_TUADD_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL(_i)				(0x0000000C + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_MAX_INDEX			64
+#define MSIX_TVCTRL_MASK_S			0
+#define MSIX_TVCTRL_MASK_M			BIT(0)
+#define PF0_FW_HLP_ARQBAH_PAGE			0x02D00180 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE			0x02D00080 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH_PAGE			0x02D00380 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE			0x02D00280 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ARQT_PAGE			0x02D00480 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH_PAGE			0x02D00100 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE			0x02D00000 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH_PAGE			0x02D00300 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE			0x02D00200 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_HLP_ATQT_PAGE			0x02D00400 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH_PAGE			0x02D40180 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE			0x02D40080 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH_PAGE			0x02D40380 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_FW_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE			0x02D40280 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_FW_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ARQT_PAGE			0x02D40480 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_FW_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH_PAGE			0x02D40100 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE			0x02D40000 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_S	0
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH_PAGE			0x02D40300 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_FW_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE			0x02D40200 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_FW_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_FW_PSM_ATQT_PAGE			0x02D40400 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_FW_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH_PAGE			0x02D80190 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_CPM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE			0x02D80090 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_CPM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH_PAGE			0x02D80390 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE			0x02D80290 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ARQT_PAGE			0x02D80490 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH_PAGE			0x02D80110 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_CPM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL_PAGE			0x02D80010 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_CPM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH_PAGE			0x02D80310 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE			0x02D80210 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_CPM_ATQT_PAGE			0x02D80410 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH_PAGE			0x02D00190 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_HLP_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE			0x02D00090 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_HLP_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH_PAGE			0x02D00390 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE			0x02D00290 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ARQT_PAGE			0x02D00490 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH_PAGE			0x02D00110 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_HLP_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL_PAGE			0x02D00010 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_HLP_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH_PAGE			0x02D00310 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE			0x02D00210 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_HLP_ATQT_PAGE			0x02D00410 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH_PAGE			0x02D40190 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_S	0
+#define PF0_MBX_PSM_ARQBAH_PAGE_ARQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE			0x02D40090 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_S	6
+#define PF0_MBX_PSM_ARQBAL_PAGE_ARQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH_PAGE			0x02D40390 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_S		0
+#define PF0_MBX_PSM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE			0x02D40290 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_S	0
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_S	28
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_MBX_PSM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ARQT_PAGE			0x02D40490 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_S		0
+#define PF0_MBX_PSM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH_PAGE			0x02D40110 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_S	0
+#define PF0_MBX_PSM_ATQBAH_PAGE_ATQBAH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL_PAGE			0x02D40010 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_S	6
+#define PF0_MBX_PSM_ATQBAL_PAGE_ATQBAL_M	MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH_PAGE			0x02D40310 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_S		0
+#define PF0_MBX_PSM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE			0x02D40210 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_S	0
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQLEN_M	MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_S	28
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQVFE_M	BIT(28)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_MBX_PSM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_MBX_PSM_ATQT_PAGE			0x02D40410 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_S		0
+#define PF0_MBX_PSM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH_PAGE			0x02D801A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE			0x02D800A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH_PAGE			0x02D803A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_CPM_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE			0x02D802A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_CPM_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ARQT_PAGE			0x02D804A0 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_CPM_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH_PAGE			0x02D80120 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL_PAGE			0x02D80020 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH_PAGE			0x02D80320 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_CPM_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE			0x02D80220 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_CPM_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_CPM_ATQT_PAGE			0x02D80420 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_CPM_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQBAH_PAGE			0x02D001A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_PAGE_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE			0x02D000A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_S	0
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_LSB_M	MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_PAGE_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH_PAGE			0x02D003A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_S		0
+#define PF0_SB_HLP_ARQH_PAGE_ARQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE			0x02D002A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_S	29
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_S	30
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_S	31
+#define PF0_SB_HLP_ARQLEN_PAGE_ARQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ARQT_PAGE			0x02D004A0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_S		0
+#define PF0_SB_HLP_ARQT_PAGE_ARQT_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH_PAGE			0x02D00120 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_PAGE_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL_PAGE			0x02D00020 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_PAGE_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH_PAGE			0x02D00320 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_S		0
+#define PF0_SB_HLP_ATQH_PAGE_ATQH_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE			0x02D00220 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_S	29
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQOVFL_M	BIT(29)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_S	30
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQCRIT_M	BIT(30)
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_S	31
+#define PF0_SB_HLP_ATQLEN_PAGE_ATQENABLE_M	BIT(31)
+#define PF0_SB_HLP_ATQT_PAGE			0x02D00420 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_S		0
+#define PF0_SB_HLP_ATQT_PAGE_ATQT_M		MAKEMASK(0x3FF, 0)
+#define PF0INT_DYN_CTL(_i)			(0x03000000 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_DYN_CTL_MAX_INDEX		2047
+#define PF0INT_DYN_CTL_INTENA_S			0
+#define PF0INT_DYN_CTL_INTENA_M			BIT(0)
+#define PF0INT_DYN_CTL_CLEARPBA_S		1
+#define PF0INT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define PF0INT_DYN_CTL_SWINT_TRIG_S		2
+#define PF0INT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define PF0INT_DYN_CTL_ITR_INDX_S		3
+#define PF0INT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define PF0INT_DYN_CTL_INTERVAL_S		5
+#define PF0INT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_S	24
+#define PF0INT_DYN_CTL_SW_ITR_INDX_ENA_M	BIT(24)
+#define PF0INT_DYN_CTL_SW_ITR_INDX_S		25
+#define PF0INT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define PF0INT_DYN_CTL_WB_ON_ITR_S		30
+#define PF0INT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define PF0INT_DYN_CTL_INTENA_MSK_S		31
+#define PF0INT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define PF0INT_ITR_0(_i)			(0x03000004 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_0_MAX_INDEX			2047
+#define PF0INT_ITR_0_INTERVAL_S			0
+#define PF0INT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_1(_i)			(0x03000008 + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_1_MAX_INDEX			2047
+#define PF0INT_ITR_1_INTERVAL_S			0
+#define PF0INT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_ITR_2(_i)			(0x0300000C + ((_i) * 4096)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define PF0INT_ITR_2_MAX_INDEX			2047
+#define PF0INT_ITR_2_INTERVAL_S			0
+#define PF0INT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define PF0INT_OICR_CPM_PAGE			0x02D03000 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_CPM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_CPM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_CPM_PAGE_RSV1_S		2
+#define PF0INT_OICR_CPM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_CPM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_CPM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_PAGE_RSV2_S		17
+#define PF0INT_OICR_CPM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_CPM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_CPM_PAGE_GRST_S		20
+#define PF0INT_OICR_CPM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_CPM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_CPM_PAGE_GPIO_S		22
+#define PF0INT_OICR_CPM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_CPM_PAGE_RSV3_S		23
+#define PF0INT_OICR_CPM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_CPM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_CPM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_CPM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_CPM_PAGE_VFLR_S		29
+#define PF0INT_OICR_CPM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_CPM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_CPM_PAGE_SWINT_S		31
+#define PF0INT_OICR_CPM_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM_PAGE		0x02D03100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_CPM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP_PAGE		0x02D01100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_HLP_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM_PAGE		0x02D02100 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_PAGE_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_S	1
+#define PF0INT_OICR_ENA_PSM_PAGE_INT_ENA_M	MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP_PAGE			0x02D01000 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_HLP_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_PAGE_QUEUE_S		1
+#define PF0INT_OICR_HLP_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_HLP_PAGE_RSV1_S		2
+#define PF0INT_OICR_HLP_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_HLP_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_HLP_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_PAGE_RSV2_S		17
+#define PF0INT_OICR_HLP_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_HLP_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_HLP_PAGE_GRST_S		20
+#define PF0INT_OICR_HLP_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_HLP_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_HLP_PAGE_GPIO_S		22
+#define PF0INT_OICR_HLP_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_HLP_PAGE_RSV3_S		23
+#define PF0INT_OICR_HLP_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_HLP_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_HLP_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_HLP_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_HLP_PAGE_VFLR_S		29
+#define PF0INT_OICR_HLP_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_HLP_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_HLP_PAGE_SWINT_S		31
+#define PF0INT_OICR_HLP_PAGE_SWINT_M		BIT(31)
+#define PF0INT_OICR_PSM_PAGE			0x02D02000 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_S		0
+#define PF0INT_OICR_PSM_PAGE_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_PAGE_QUEUE_S		1
+#define PF0INT_OICR_PSM_PAGE_QUEUE_M		BIT(1)
+#define PF0INT_OICR_PSM_PAGE_RSV1_S		2
+#define PF0INT_OICR_PSM_PAGE_RSV1_M		MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_S		10
+#define PF0INT_OICR_PSM_PAGE_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_PAGE_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_S	12
+#define PF0INT_OICR_PSM_PAGE_TSYN_EVNT_M	BIT(12)
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_PAGE_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_PAGE_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_PAGE_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_PAGE_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_PAGE_RSV2_S		17
+#define PF0INT_OICR_PSM_PAGE_RSV2_M		MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_S	19
+#define PF0INT_OICR_PSM_PAGE_MAL_DETECT_M	BIT(19)
+#define PF0INT_OICR_PSM_PAGE_GRST_S		20
+#define PF0INT_OICR_PSM_PAGE_GRST_M		BIT(20)
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_S	21
+#define PF0INT_OICR_PSM_PAGE_PCI_EXCEPTION_M	BIT(21)
+#define PF0INT_OICR_PSM_PAGE_GPIO_S		22
+#define PF0INT_OICR_PSM_PAGE_GPIO_M		BIT(22)
+#define PF0INT_OICR_PSM_PAGE_RSV3_S		23
+#define PF0INT_OICR_PSM_PAGE_RSV3_M		BIT(23)
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_S	24
+#define PF0INT_OICR_PSM_PAGE_STORM_DETECT_M	BIT(24)
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_S 25
+#define PF0INT_OICR_PSM_PAGE_LINK_STAT_CHANGE_M BIT(25)
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_PAGE_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PAGE_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_S	28
+#define PF0INT_OICR_PSM_PAGE_PE_CRITERR_M	BIT(28)
+#define PF0INT_OICR_PSM_PAGE_VFLR_S		29
+#define PF0INT_OICR_PSM_PAGE_VFLR_M		BIT(29)
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_S	30
+#define PF0INT_OICR_PSM_PAGE_XLR_HW_DONE_M	BIT(30)
+#define PF0INT_OICR_PSM_PAGE_SWINT_S		31
+#define PF0INT_OICR_PSM_PAGE_SWINT_M		BIT(31)
+#define QRX_TAIL_PAGE(_QRX)			(0x03800000 + ((_QRX) * 4096)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_PAGE_MAX_INDEX			2047
+#define QRX_TAIL_PAGE_TAIL_S			0
+#define QRX_TAIL_PAGE_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_DBELL_PAGE(_DBQM)		(0x04000000 + ((_DBQM) * 4096)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_PAGE_MAX_INDEX		16383
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_S	0
+#define QTX_COMM_DBELL_PAGE_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL_PAGE(_DBLQ)		(0x02F00000 + ((_DBLQ) * 8)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_PAGE_MAX_INDEX	255
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_PAGE_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define VSI_MBX_ARQBAH(_VSI)			(0x02000018 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAH_MAX_INDEX		767
+#define VSI_MBX_ARQBAH_ARQBAH_S			0
+#define VSI_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ARQBAL(_VSI)			(0x02000014 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQBAL_MAX_INDEX		767
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VSI_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VSI_MBX_ARQBAL_ARQBAL_S			6
+#define VSI_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ARQH(_VSI)			(0x02000020 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQH_MAX_INDEX			767
+#define VSI_MBX_ARQH_ARQH_S			0
+#define VSI_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN(_VSI)			(0x0200001C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQLEN_MAX_INDEX		767
+#define VSI_MBX_ARQLEN_ARQLEN_S			0
+#define VSI_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ARQLEN_ARQVFE_S			28
+#define VSI_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VSI_MBX_ARQLEN_ARQOVFL_S		29
+#define VSI_MBX_ARQLEN_ARQOVFL_M		BIT(29)
+#define VSI_MBX_ARQLEN_ARQCRIT_S		30
+#define VSI_MBX_ARQLEN_ARQCRIT_M		BIT(30)
+#define VSI_MBX_ARQLEN_ARQENABLE_S		31
+#define VSI_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VSI_MBX_ARQT(_VSI)			(0x02000024 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ARQT_MAX_INDEX			767
+#define VSI_MBX_ARQT_ARQT_S			0
+#define VSI_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQBAH(_VSI)			(0x02000004 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAH_MAX_INDEX		767
+#define VSI_MBX_ATQBAH_ATQBAH_S			0
+#define VSI_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_MBX_ATQBAL(_VSI)			(0x02000000 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQBAL_MAX_INDEX		767
+#define VSI_MBX_ATQBAL_ATQBAL_S			6
+#define VSI_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VSI_MBX_ATQH(_VSI)			(0x0200000C + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQH_MAX_INDEX			767
+#define VSI_MBX_ATQH_ATQH_S			0
+#define VSI_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN(_VSI)			(0x02000008 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQLEN_MAX_INDEX		767
+#define VSI_MBX_ATQLEN_ATQLEN_S			0
+#define VSI_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VSI_MBX_ATQLEN_ATQVFE_S			28
+#define VSI_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VSI_MBX_ATQLEN_ATQOVFL_S		29
+#define VSI_MBX_ATQLEN_ATQOVFL_M		BIT(29)
+#define VSI_MBX_ATQLEN_ATQCRIT_S		30
+#define VSI_MBX_ATQLEN_ATQCRIT_M		BIT(30)
+#define VSI_MBX_ATQLEN_ATQENABLE_S		31
+#define VSI_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VSI_MBX_ATQT(_VSI)			(0x02000010 + ((_VSI) * 4096)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_MBX_ATQT_MAX_INDEX			767
+#define VSI_MBX_ATQT_ATQT_S			0
+#define VSI_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_ACL_ACCESS_CMD			0x00391000 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_CMD_TABLE_ID_S		0
+#define GL_ACL_ACCESS_CMD_TABLE_ID_M		MAKEMASK(0xFF, 0)
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_S		8
+#define GL_ACL_ACCESS_CMD_ENTRY_INDEX_M		MAKEMASK(0xFFF, 8)
+#define GL_ACL_ACCESS_CMD_OPERATION_S		20
+#define GL_ACL_ACCESS_CMD_OPERATION_M		BIT(20)
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_S		24
+#define GL_ACL_ACCESS_CMD_OBJ_TYPE_M		MAKEMASK(0xF, 24)
+#define GL_ACL_ACCESS_CMD_EXECUTE_S		31
+#define GL_ACL_ACCESS_CMD_EXECUTE_M		BIT(31)
+#define GL_ACL_ACCESS_STATUS			0x00391004 /* Reset Source: CORER */
+#define GL_ACL_ACCESS_STATUS_BUSY_S		0
+#define GL_ACL_ACCESS_STATUS_BUSY_M		BIT(0)
+#define GL_ACL_ACCESS_STATUS_DONE_S		1
+#define GL_ACL_ACCESS_STATUS_DONE_M		BIT(1)
+#define GL_ACL_ACCESS_STATUS_ERROR_S		2
+#define GL_ACL_ACCESS_STATUS_ERROR_M		BIT(2)
+#define GL_ACL_ACCESS_STATUS_OPERATION_S	3
+#define GL_ACL_ACCESS_STATUS_OPERATION_M	BIT(3)
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_S	4
+#define GL_ACL_ACCESS_STATUS_ERROR_CODE_M	MAKEMASK(0xF, 4)
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_S		8
+#define GL_ACL_ACCESS_STATUS_TABLE_ID_M		MAKEMASK(0xFF, 8)
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_S	16
+#define GL_ACL_ACCESS_STATUS_ENTRY_INDEX_M	MAKEMASK(0xFFF, 16)
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_S		28
+#define GL_ACL_ACCESS_STATUS_OBJ_TYPE_M		MAKEMASK(0xF, 28)
+#define GL_ACL_ACTMEM_ACT(_i)			(0x00393824 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_ACL_ACTMEM_ACT_MAX_INDEX		1
+#define GL_ACL_ACTMEM_ACT_VALUE_S		0
+#define GL_ACL_ACTMEM_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_ACTMEM_ACT_MDID_S		20
+#define GL_ACL_ACTMEM_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_ACTMEM_ACT_PRIORITY_S		28
+#define GL_ACL_ACTMEM_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_CHICKEN_REGISTER			0x00393810 /* Reset Source: CORER */
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_S 0
+#define GL_ACL_CHICKEN_REGISTER_TCAM_DATA_POL_CH_M BIT(0)
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_S 1
+#define GL_ACL_CHICKEN_REGISTER_TCAM_ADDR_POL_CH_M BIT(1)
+#define GL_ACL_DEFAULT_ACT(_i)			(0x00391168 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_DEFAULT_ACT_MAX_INDEX		15
+#define GL_ACL_DEFAULT_ACT_VALUE_S		0
+#define GL_ACL_DEFAULT_ACT_VALUE_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_DEFAULT_ACT_MDID_S		20
+#define GL_ACL_DEFAULT_ACT_MDID_M		MAKEMASK(0x3F, 20)
+#define GL_ACL_DEFAULT_ACT_PRIORITY_S		28
+#define GL_ACL_DEFAULT_ACT_PRIORITY_M		MAKEMASK(0x7, 28)
+#define GL_ACL_PROFILE_BWSB_SEL(_i)		(0x00391008 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_BWSB_SEL_MAX_INDEX	31
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_S	0
+#define GL_ACL_PROFILE_BWSB_SEL_BSB_SRC_OFF_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_S	8
+#define GL_ACL_PROFILE_BWSB_SEL_WSB_SRC_OFF_M	MAKEMASK(0x1F, 8)
+#define GL_ACL_PROFILE_DWSB_SEL(_i)		(0x00391088 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_DWSB_SEL_MAX_INDEX	15
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_S 0
+#define GL_ACL_PROFILE_DWSB_SEL_DWORD_SEL_OFF_M MAKEMASK(0xF, 0)
+#define GL_ACL_PROFILE_PF_CFG(_i)		(0x003910C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_PF_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_S	0
+#define GL_ACL_PROFILE_PF_CFG_SCEN_SEL_M	MAKEMASK(0x3F, 0)
+#define GL_ACL_PROFILE_RC_CFG(_i)		(0x003910E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RC_CFG_MAX_INDEX		7
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_S	0
+#define GL_ACL_PROFILE_RC_CFG_LOW_BOUND_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_S	16
+#define GL_ACL_PROFILE_RC_CFG_HIGH_BOUND_M	MAKEMASK(0xFFFF, 16)
+#define GL_ACL_PROFILE_RCF_MASK(_i)		(0x00391108 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_ACL_PROFILE_RCF_MASK_MAX_INDEX	7
+#define GL_ACL_PROFILE_RCF_MASK_MASK_S		0
+#define GL_ACL_PROFILE_RCF_MASK_MASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG(_i)		(0x003938AC + ((_i) * 4)) /* _i=0...19 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_ACT_CFG_MAX_INDEX	19
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_S	0
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_SEL_M	MAKEMASK(0xF, 0)
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_S	8
+#define GL_ACL_SCENARIO_ACT_CFG_ACTMEM_EN_M	BIT(8)
+#define GL_ACL_SCENARIO_CFG_H(_i)		(0x0039386C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_H_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_S		0
+#define GL_ACL_SCENARIO_CFG_H_SELECT4_M		MAKEMASK(0x1F, 0)
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_S	8
+#define GL_ACL_SCENARIO_CFG_H_CHUNKMASK_M	MAKEMASK(0xFF, 8)
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_S	24
+#define GL_ACL_SCENARIO_CFG_H_START_COMPARE_M	BIT(24)
+#define GL_ACL_SCENARIO_CFG_H_START_SET_S	28
+#define GL_ACL_SCENARIO_CFG_H_START_SET_M	BIT(28)
+#define GL_ACL_SCENARIO_CFG_L(_i)		(0x0039382C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GL_ACL_SCENARIO_CFG_L_MAX_INDEX		15
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_S		0
+#define GL_ACL_SCENARIO_CFG_L_SELECT0_M		MAKEMASK(0x7F, 0)
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_S		8
+#define GL_ACL_SCENARIO_CFG_L_SELECT1_M		MAKEMASK(0x7F, 8)
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_S		16
+#define GL_ACL_SCENARIO_CFG_L_SELECT2_M		MAKEMASK(0x7F, 16)
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_S		24
+#define GL_ACL_SCENARIO_CFG_L_SELECT3_M		MAKEMASK(0x7F, 24)
+#define GL_ACL_TCAM_KEY_H			0x00393818 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_S 0
+#define GL_ACL_TCAM_KEY_H_GL_ACL_FFU_TCAM_KEY_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_H			0x00393820 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_S 0
+#define GL_ACL_TCAM_KEY_INV_H_GL_ACL_FFU_TCAM_KEY_INV_H_M MAKEMASK(0xFF, 0)
+#define GL_ACL_TCAM_KEY_INV_L			0x0039381C /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_S 0
+#define GL_ACL_TCAM_KEY_INV_L_GL_ACL_FFU_TCAM_KEY_INV_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACL_TCAM_KEY_L			0x00393814 /* Reset Source: CORER */
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_S 0
+#define GL_ACL_TCAM_KEY_L_GL_ACL_FFU_TCAM_KEY_L_M MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_ACL_DEF_SEL(_VSI)			(0x00391800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_ACL_DEF_SEL_MAX_INDEX		767
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_S	0
+#define VSI_ACL_DEF_SEL_RX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 0)
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_S	4
+#define VSI_ACL_DEF_SEL_RX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 4)
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_S	8
+#define VSI_ACL_DEF_SEL_TX_PROFILE_MISS_SEL_M	MAKEMASK(0x3, 8)
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_S	12
+#define VSI_ACL_DEF_SEL_TX_TABLES_MISS_SEL_M	MAKEMASK(0x3, 12)
+#define GL_SWT_L2TAG0(_i)			(0x000492A8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG0_MAX_INDEX			7
+#define GL_SWT_L2TAG0_DATA_S			0
+#define GL_SWT_L2TAG0_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAG1(_i)			(0x000492C8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAG1_MAX_INDEX			7
+#define GL_SWT_L2TAG1_DATA_S			0
+#define GL_SWT_L2TAG1_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_L2TAGCTRL(_i)			(0x001D2660 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGCTRL_MAX_INDEX		7
+#define GL_SWT_L2TAGCTRL_LENGTH_S		0
+#define GL_SWT_L2TAGCTRL_LENGTH_M		MAKEMASK(0x7F, 0)
+#define GL_SWT_L2TAGCTRL_HAS_UP_S		7
+#define GL_SWT_L2TAGCTRL_HAS_UP_M		BIT(7)
+#define GL_SWT_L2TAGCTRL_ISVLAN_S		9
+#define GL_SWT_L2TAGCTRL_ISVLAN_M		BIT(9)
+#define GL_SWT_L2TAGCTRL_INNERUP_S		10
+#define GL_SWT_L2TAGCTRL_INNERUP_M		BIT(10)
+#define GL_SWT_L2TAGCTRL_OUTERUP_S		11
+#define GL_SWT_L2TAGCTRL_OUTERUP_M		BIT(11)
+#define GL_SWT_L2TAGCTRL_LONG_S			12
+#define GL_SWT_L2TAGCTRL_LONG_M			BIT(12)
+#define GL_SWT_L2TAGCTRL_ISMPLS_S		13
+#define GL_SWT_L2TAGCTRL_ISMPLS_M		BIT(13)
+#define GL_SWT_L2TAGCTRL_ISNSH_S		14
+#define GL_SWT_L2TAGCTRL_ISNSH_M		BIT(14)
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_S		16
+#define GL_SWT_L2TAGCTRL_ETHERTYPE_M		MAKEMASK(0xFFFF, 16)
+#define GL_SWT_L2TAGRXEB(_i)			(0x00052000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGRXEB_MAX_INDEX		7
+#define GL_SWT_L2TAGRXEB_OFFSET_S		0
+#define GL_SWT_L2TAGRXEB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGRXEB_LENGTH_S		8
+#define GL_SWT_L2TAGRXEB_LENGTH_M		MAKEMASK(0x3, 8)
+#define GL_SWT_L2TAGTXIB(_i)			(0x000492E8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_SWT_L2TAGTXIB_MAX_INDEX		7
+#define GL_SWT_L2TAGTXIB_OFFSET_S		0
+#define GL_SWT_L2TAGTXIB_OFFSET_M		MAKEMASK(0xFF, 0)
+#define GL_SWT_L2TAGTXIB_LENGTH_S		8
+#define GL_SWT_L2TAGTXIB_LENGTH_M		MAKEMASK(0x3, 8)
+#define PRT_TDPUL2TAGSEN			0x00040BA0 /* Reset Source: CORER */
+#define PRT_TDPUL2TAGSEN_ENABLE_S		0
+#define PRT_TDPUL2TAGSEN_ENABLE_M		MAKEMASK(0xFF, 0)
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_S		8
+#define PRT_TDPUL2TAGSEN_NONLAST_TAG_M		MAKEMASK(0xFF, 8)
+#define GLCM_PE_CACHESIZE			0x005046B4 /* Reset Source: CORER */
+#define GLCM_PE_CACHESIZE_WORD_SIZE_S		0
+#define GLCM_PE_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFFF, 0)
+#define GLCM_PE_CACHESIZE_SETS_S		12
+#define GLCM_PE_CACHESIZE_SETS_M		MAKEMASK(0xF, 12)
+#define GLCM_PE_CACHESIZE_WAYS_S		16
+#define GLCM_PE_CACHESIZE_WAYS_M		MAKEMASK(0x1FF, 16)
+#define GLCOMM_CQ_CTL(_CQ)			(0x000F0000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLCOMM_CQ_CTL_MAX_INDEX			511
+#define GLCOMM_CQ_CTL_COMP_TYPE_S		0
+#define GLCOMM_CQ_CTL_COMP_TYPE_M		MAKEMASK(0x7, 0)
+#define GLCOMM_CQ_CTL_CMD_S			4
+#define GLCOMM_CQ_CTL_CMD_M			MAKEMASK(0x7, 4)
+#define GLCOMM_CQ_CTL_ID_S			16
+#define GLCOMM_CQ_CTL_ID_M			MAKEMASK(0x3FFF, 16)
+#define GLCOMM_MIN_MAX_PKT			0x000FC064 /* Reset Source: CORER */
+#define GLCOMM_MIN_MAX_PKT_MAHDL_S		0
+#define GLCOMM_MIN_MAX_PKT_MAHDL_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_MIN_MAX_PKT_MIHDL_S		16
+#define GLCOMM_MIN_MAX_PKT_MIHDL_M		MAKEMASK(0x3F, 16)
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_S	22
+#define GLCOMM_MIN_MAX_PKT_LSO_COMS_MIHDL_M	MAKEMASK(0x3FF, 22)
+#define GLCOMM_PKT_SHAPER_PROF(_i)		(0x002D2DA8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLCOMM_PKT_SHAPER_PROF_MAX_INDEX	7
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_S		0
+#define GLCOMM_PKT_SHAPER_PROF_PKTCNT_M		MAKEMASK(0x3F, 0)
+#define GLCOMM_QTX_CNTX_CTL			0x002D2DC8 /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_S		0
+#define GLCOMM_QTX_CNTX_CTL_QUEUE_ID_M		MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QTX_CNTX_CTL_CMD_S		16
+#define GLCOMM_QTX_CNTX_CTL_CMD_M		MAKEMASK(0x7, 16)
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_S		19
+#define GLCOMM_QTX_CNTX_CTL_CMD_EXEC_M		BIT(19)
+#define GLCOMM_QTX_CNTX_DATA(_i)		(0x002D2D40 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_DATA_MAX_INDEX		9
+#define GLCOMM_QTX_CNTX_DATA_DATA_S		0
+#define GLCOMM_QTX_CNTX_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCOMM_QTX_CNTX_STAT			0x002D2DCC /* Reset Source: CORER */
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_S	0
+#define GLCOMM_QTX_CNTX_STAT_CMD_IN_PROG_M	BIT(0)
+#define GLCOMM_QUANTA_PROF(_i)			(0x002D2D68 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLCOMM_QUANTA_PROF_MAX_INDEX		15
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_S	0
+#define GLCOMM_QUANTA_PROF_QUANTA_SIZE_M	MAKEMASK(0x3FFF, 0)
+#define GLCOMM_QUANTA_PROF_MAX_CMD_S		16
+#define GLCOMM_QUANTA_PROF_MAX_CMD_M		MAKEMASK(0xFF, 16)
+#define GLCOMM_QUANTA_PROF_MAX_DESC_S		24
+#define GLCOMM_QUANTA_PROF_MAX_DESC_M		MAKEMASK(0x3F, 24)
+#define GLLAN_TCLAN_CACHE_CTL			0x000FC0B8 /* Reset Source: CORER */
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_S 0
+#define GLLAN_TCLAN_CACHE_CTL_MIN_FETCH_THRESH_M MAKEMASK(0x3F, 0)
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_S	6
+#define GLLAN_TCLAN_CACHE_CTL_FETCH_CL_ALIGN_M	BIT(6)
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_S 7
+#define GLLAN_TCLAN_CACHE_CTL_MIN_ALLOC_THRESH_M MAKEMASK(0x7F, 7)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_S 14
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_ENTRY_CNT_M MAKEMASK(0xFF, 14)
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_S	22
+#define GLLAN_TCLAN_CACHE_CTL_CACHE_DESC_LIM_M	MAKEMASK(0x3FF, 22)
+#define GLTCLAN_CQ_CNTX0(_CQ)			(0x000F0800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX0_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_S	0
+#define GLTCLAN_CQ_CNTX0_RING_ADDR_LSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX1(_CQ)			(0x000F1000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX1_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_S	0
+#define GLTCLAN_CQ_CNTX1_RING_ADDR_MSB_M	MAKEMASK(0x1FFFFFF, 0)
+#define GLTCLAN_CQ_CNTX10(_CQ)			(0x000F5800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX10_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX10_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX11(_CQ)			(0x000F6000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX11_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX11_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX12(_CQ)			(0x000F6800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX12_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX12_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX13(_CQ)			(0x000F7000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX13_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX13_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX14(_CQ)			(0x000F7800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX14_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX14_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX15(_CQ)			(0x000F8000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX15_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX15_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX16(_CQ)			(0x000F8800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX16_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX16_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX17(_CQ)			(0x000F9000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX17_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX17_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX18(_CQ)			(0x000F9800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX18_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX18_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX19(_CQ)			(0x000FA000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX19_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX19_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX2(_CQ)			(0x000F1800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX2_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX2_RING_LEN_S		0
+#define GLTCLAN_CQ_CNTX2_RING_LEN_M		MAKEMASK(0x3FFFF, 0)
+#define GLTCLAN_CQ_CNTX20(_CQ)			(0x000FA800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX20_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX20_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX21(_CQ)			(0x000FB000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX21_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX21_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX3(_CQ)			(0x000F2000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX3_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX3_GENERATION_S		0
+#define GLTCLAN_CQ_CNTX3_GENERATION_M		BIT(0)
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_S		1
+#define GLTCLAN_CQ_CNTX3_CQ_WR_PTR_M		MAKEMASK(0x3FFFFF, 1)
+#define GLTCLAN_CQ_CNTX4(_CQ)			(0x000F2800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX4_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX4_PF_NUM_S		0
+#define GLTCLAN_CQ_CNTX4_PF_NUM_M		MAKEMASK(0x7, 0)
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_S		3
+#define GLTCLAN_CQ_CNTX4_VMVF_NUM_M		MAKEMASK(0x3FF, 3)
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_S		13
+#define GLTCLAN_CQ_CNTX4_VMVF_TYPE_M		MAKEMASK(0x3, 13)
+#define GLTCLAN_CQ_CNTX5(_CQ)			(0x000F3000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX5_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX5_TPH_EN_S		0
+#define GLTCLAN_CQ_CNTX5_TPH_EN_M		BIT(0)
+#define GLTCLAN_CQ_CNTX5_CPU_ID_S		1
+#define GLTCLAN_CQ_CNTX5_CPU_ID_M		MAKEMASK(0xFF, 1)
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_S	9
+#define GLTCLAN_CQ_CNTX5_FLUSH_ON_ITR_DIS_M	BIT(9)
+#define GLTCLAN_CQ_CNTX6(_CQ)			(0x000F3800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX6_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX6_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX7(_CQ)			(0x000F4000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX7_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX7_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX8(_CQ)			(0x000F4800 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX8_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX8_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCLAN_CQ_CNTX9(_CQ)			(0x000F5000 + ((_CQ) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLTCLAN_CQ_CNTX9_MAX_INDEX		511
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_S		0
+#define GLTCLAN_CQ_CNTX9_CQ_CACHLINE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBELL(_DBQM)			(0x002C0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_DBELL_MAX_INDEX		16383
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_S		0
+#define QTX_COMM_DBELL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_CNTX(_i, _DBLQ)		(0x002D0000 + ((_i) * 1024 + (_DBLQ) * 4)) /* _i=0...4, _DBLQ=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_CNTX_MAX_INDEX		4
+#define QTX_COMM_DBLQ_CNTX_DATA_S		0
+#define QTX_COMM_DBLQ_CNTX_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QTX_COMM_DBLQ_DBELL(_DBLQ)		(0x002D1400 + ((_DBLQ) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_COMM_DBLQ_DBELL_MAX_INDEX		255
+#define QTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define QTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD(_DBQM)			(0x000E0000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QTX_COMM_HEAD_MAX_INDEX			16383
+#define QTX_COMM_HEAD_HEAD_S			0
+#define QTX_COMM_HEAD_HEAD_M			MAKEMASK(0x1FFF, 0)
+#define QTX_COMM_HEAD_RS_PENDING_S		16
+#define QTX_COMM_HEAD_RS_PENDING_M		BIT(16)
+#define GL_FW_TOOL_ARQBAH			0x000801C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAH_ARQBAH_S		0
+#define GL_FW_TOOL_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ARQBAL			0x000800C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_S		0
+#define GL_FW_TOOL_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ARQBAL_ARQBAL_S		6
+#define GL_FW_TOOL_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ARQH				0x000803C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQH_ARQH_S			0
+#define GL_FW_TOOL_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN			0x000802C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQLEN_ARQLEN_S		0
+#define GL_FW_TOOL_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ARQLEN_ARQVFE_S		28
+#define GL_FW_TOOL_ARQLEN_ARQVFE_M		BIT(28)
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_S		29
+#define GL_FW_TOOL_ARQLEN_ARQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_S		30
+#define GL_FW_TOOL_ARQLEN_ARQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_S		31
+#define GL_FW_TOOL_ARQLEN_ARQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ARQT				0x000804C0 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ARQT_ARQT_S			0
+#define GL_FW_TOOL_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQBAH			0x00080140 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAH_ATQBAH_S		0
+#define GL_FW_TOOL_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FW_TOOL_ATQBAL			0x00080040 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_S		0
+#define GL_FW_TOOL_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define GL_FW_TOOL_ATQBAL_ATQBAL_S		6
+#define GL_FW_TOOL_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define GL_FW_TOOL_ATQH				0x00080340 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQH_ATQH_S			0
+#define GL_FW_TOOL_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN			0x00080240 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQLEN_ATQLEN_S		0
+#define GL_FW_TOOL_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define GL_FW_TOOL_ATQLEN_ATQVFE_S		28
+#define GL_FW_TOOL_ATQLEN_ATQVFE_M		BIT(28)
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_S		29
+#define GL_FW_TOOL_ATQLEN_ATQOVFL_M		BIT(29)
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_S		30
+#define GL_FW_TOOL_ATQLEN_ATQCRIT_M		BIT(30)
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_S		31
+#define GL_FW_TOOL_ATQLEN_ATQENABLE_M		BIT(31)
+#define GL_FW_TOOL_ATQT				0x00080440 /* Reset Source: EMPR */
+#define GL_FW_TOOL_ATQT_ATQT_S			0
+#define GL_FW_TOOL_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define GL_MBX_PASID				0x00231EC0 /* Reset Source: CORER */
+#define GL_MBX_PASID_PASID_MODE_S		0
+#define GL_MBX_PASID_PASID_MODE_M		BIT(0)
+#define GL_MBX_PASID_PASID_MODE_VALID_S		1
+#define GL_MBX_PASID_PASID_MODE_VALID_M		BIT(1)
+#define PF_FW_ARQBAH				0x00080180 /* Reset Source: EMPR */
+#define PF_FW_ARQBAH_ARQBAH_S			0
+#define PF_FW_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ARQBAL				0x00080080 /* Reset Source: EMPR */
+#define PF_FW_ARQBAL_ARQBAL_LSB_S		0
+#define PF_FW_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ARQBAL_ARQBAL_S			6
+#define PF_FW_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ARQH				0x00080380 /* Reset Source: EMPR */
+#define PF_FW_ARQH_ARQH_S			0
+#define PF_FW_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN				0x00080280 /* Reset Source: EMPR */
+#define PF_FW_ARQLEN_ARQLEN_S			0
+#define PF_FW_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ARQLEN_ARQVFE_S			28
+#define PF_FW_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_FW_ARQLEN_ARQOVFL_S			29
+#define PF_FW_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_FW_ARQLEN_ARQCRIT_S			30
+#define PF_FW_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_FW_ARQLEN_ARQENABLE_S		31
+#define PF_FW_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_FW_ARQT				0x00080480 /* Reset Source: EMPR */
+#define PF_FW_ARQT_ARQT_S			0
+#define PF_FW_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQBAH				0x00080100 /* Reset Source: EMPR */
+#define PF_FW_ATQBAH_ATQBAH_S			0
+#define PF_FW_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_FW_ATQBAL				0x00080000 /* Reset Source: EMPR */
+#define PF_FW_ATQBAL_ATQBAL_LSB_S		0
+#define PF_FW_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_FW_ATQBAL_ATQBAL_S			6
+#define PF_FW_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_FW_ATQH				0x00080300 /* Reset Source: EMPR */
+#define PF_FW_ATQH_ATQH_S			0
+#define PF_FW_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN				0x00080200 /* Reset Source: EMPR */
+#define PF_FW_ATQLEN_ATQLEN_S			0
+#define PF_FW_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_FW_ATQLEN_ATQVFE_S			28
+#define PF_FW_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_FW_ATQLEN_ATQOVFL_S			29
+#define PF_FW_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_FW_ATQLEN_ATQCRIT_S			30
+#define PF_FW_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_FW_ATQLEN_ATQENABLE_S		31
+#define PF_FW_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_FW_ATQT				0x00080400 /* Reset Source: EMPR */
+#define PF_FW_ATQT_ATQT_S			0
+#define PF_FW_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
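
The BAH/BAL/LEN/H/T quintet above is the usual descriptor-ring plumbing for a control queue: base address split across two registers, a length register that also carries the enable bit, and head/tail indices. A hypothetical bring-up sketch for the PF firmware admin send queue follows; wr32() is stubbed here, the register values are copied from the definitions above, and the sequencing is illustrative rather than the driver's actual init path.

#include <stdint.h>
#include <stdio.h>

/* Values mirror the PF_FW definitions above. */
#define PF_FW_ATQBAL             0x00080000
#define PF_FW_ATQBAH             0x00080100
#define PF_FW_ATQLEN             0x00080200
#define PF_FW_ATQH               0x00080300
#define PF_FW_ATQT               0x00080400
#define PF_FW_ATQLEN_ATQLEN_S    0
#define PF_FW_ATQLEN_ATQLEN_M    (0x3FFu << 0)
#define PF_FW_ATQLEN_ATQENABLE_M (1u << 31)

/* Stub MMIO write so the sketch compiles standalone. */
static void wr32(uint32_t reg, uint32_t val)
{
	printf("wr32(0x%08X) <- 0x%08X\n", reg, val);
}

static void pf_fw_atq_init(uint64_t ring_dma, uint16_t num_desc)
{
	/* 64-bit ring base split across BAL (low half) and BAH (high
	 * half); the BAL field starting at bit 6 implies the ring must
	 * be 64-byte aligned. */
	wr32(PF_FW_ATQBAL, (uint32_t)ring_dma);
	wr32(PF_FW_ATQBAH, (uint32_t)(ring_dma >> 32));

	/* Ring length in descriptors plus the enable bit. */
	wr32(PF_FW_ATQLEN,
	     (((uint32_t)num_desc << PF_FW_ATQLEN_ATQLEN_S) &
	      PF_FW_ATQLEN_ATQLEN_M) | PF_FW_ATQLEN_ATQENABLE_M);

	/* Head and tail both start at descriptor 0. */
	wr32(PF_FW_ATQH, 0);
	wr32(PF_FW_ATQT, 0);
}

int main(void)
{
	pf_fw_atq_init(0x12340000ULL, 512);
	return 0;
}
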
+#define PF_MBX_ARQBAH				0x0022E400 /* Reset Source: CORER */
+#define PF_MBX_ARQBAH_ARQBAH_S			0
+#define PF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ARQBAL				0x0022E380 /* Reset Source: CORER */
+#define PF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define PF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_MBX_ARQBAL_ARQBAL_S			6
+#define PF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ARQH				0x0022E500 /* Reset Source: CORER */
+#define PF_MBX_ARQH_ARQH_S			0
+#define PF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN				0x0022E480 /* Reset Source: CORER */
+#define PF_MBX_ARQLEN_ARQLEN_S			0
+#define PF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ARQLEN_ARQVFE_S			28
+#define PF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_MBX_ARQLEN_ARQOVFL_S			29
+#define PF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_MBX_ARQLEN_ARQCRIT_S			30
+#define PF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_MBX_ARQLEN_ARQENABLE_S		31
+#define PF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_MBX_ARQT				0x0022E580 /* Reset Source: CORER */
+#define PF_MBX_ARQT_ARQT_S			0
+#define PF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQBAH				0x0022E180 /* Reset Source: CORER */
+#define PF_MBX_ATQBAH_ATQBAH_S			0
+#define PF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_MBX_ATQBAL				0x0022E100 /* Reset Source: CORER */
+#define PF_MBX_ATQBAL_ATQBAL_S			6
+#define PF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_MBX_ATQH				0x0022E280 /* Reset Source: CORER */
+#define PF_MBX_ATQH_ATQH_S			0
+#define PF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN				0x0022E200 /* Reset Source: CORER */
+#define PF_MBX_ATQLEN_ATQLEN_S			0
+#define PF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_MBX_ATQLEN_ATQVFE_S			28
+#define PF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_MBX_ATQLEN_ATQOVFL_S			29
+#define PF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_MBX_ATQLEN_ATQCRIT_S			30
+#define PF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_MBX_ATQLEN_ATQENABLE_S		31
+#define PF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_MBX_ATQT				0x0022E300 /* Reset Source: CORER */
+#define PF_MBX_ATQT_ATQT_S			0
+#define PF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQBAH				0x0022FF00 /* Reset Source: CORER */
+#define PF_SB_ARQBAH_ARQBAH_S			0
+#define PF_SB_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ARQBAL				0x0022FE80 /* Reset Source: CORER */
+#define PF_SB_ARQBAL_ARQBAL_LSB_S		0
+#define PF_SB_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF_SB_ARQBAL_ARQBAL_S			6
+#define PF_SB_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ARQH				0x00230000 /* Reset Source: CORER */
+#define PF_SB_ARQH_ARQH_S			0
+#define PF_SB_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN				0x0022FF80 /* Reset Source: CORER */
+#define PF_SB_ARQLEN_ARQLEN_S			0
+#define PF_SB_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ARQLEN_ARQVFE_S			28
+#define PF_SB_ARQLEN_ARQVFE_M			BIT(28)
+#define PF_SB_ARQLEN_ARQOVFL_S			29
+#define PF_SB_ARQLEN_ARQOVFL_M			BIT(29)
+#define PF_SB_ARQLEN_ARQCRIT_S			30
+#define PF_SB_ARQLEN_ARQCRIT_M			BIT(30)
+#define PF_SB_ARQLEN_ARQENABLE_S		31
+#define PF_SB_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF_SB_ARQT				0x00230080 /* Reset Source: CORER */
+#define PF_SB_ARQT_ARQT_S			0
+#define PF_SB_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQBAH				0x0022FC80 /* Reset Source: CORER */
+#define PF_SB_ATQBAH_ATQBAH_S			0
+#define PF_SB_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PF_SB_ATQBAL				0x0022FC00 /* Reset Source: CORER */
+#define PF_SB_ATQBAL_ATQBAL_S			6
+#define PF_SB_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define PF_SB_ATQH				0x0022FD80 /* Reset Source: CORER */
+#define PF_SB_ATQH_ATQH_S			0
+#define PF_SB_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN				0x0022FD00 /* Reset Source: CORER */
+#define PF_SB_ATQLEN_ATQLEN_S			0
+#define PF_SB_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_ATQLEN_ATQVFE_S			28
+#define PF_SB_ATQLEN_ATQVFE_M			BIT(28)
+#define PF_SB_ATQLEN_ATQOVFL_S			29
+#define PF_SB_ATQLEN_ATQOVFL_M			BIT(29)
+#define PF_SB_ATQLEN_ATQCRIT_S			30
+#define PF_SB_ATQLEN_ATQCRIT_M			BIT(30)
+#define PF_SB_ATQLEN_ATQENABLE_S		31
+#define PF_SB_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF_SB_ATQT				0x0022FE00 /* Reset Source: CORER */
+#define PF_SB_ATQT_ATQT_S			0
+#define PF_SB_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF_SB_REM_DEV_CTL			0x002300F0 /* Reset Source: CORER */
+#define PF_SB_REM_DEV_CTL_DEST_EN_S		0
+#define PF_SB_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define PF0_FW_HLP_ARQBAH			0x000801C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_FW_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ARQBAL			0x000800C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_FW_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ARQH				0x000803C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQH_ARQH_S			0
+#define PF0_FW_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN			0x000802C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_FW_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_FW_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ARQT				0x000804C8 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ARQT_ARQT_S			0
+#define PF0_FW_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQBAH			0x00080148 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_FW_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_HLP_ATQBAL			0x00080048 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_HLP_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_FW_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_HLP_ATQH				0x00080348 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQH_ATQH_S			0
+#define PF0_FW_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN			0x00080248 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_FW_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_FW_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_HLP_ATQT				0x00080448 /* Reset Source: EMPR */
+#define PF0_FW_HLP_ATQT_ATQT_S			0
+#define PF0_FW_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQBAH			0x000801C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_FW_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ARQBAL			0x000800C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_FW_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_FW_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ARQH				0x000803C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQH_ARQH_S			0
+#define PF0_FW_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN			0x000802C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_FW_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_FW_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_FW_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_FW_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_FW_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ARQT				0x000804C4 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ARQT_ARQT_S			0
+#define PF0_FW_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQBAH			0x00080144 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_FW_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_FW_PSM_ATQBAL			0x00080044 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_S		0
+#define PF0_FW_PSM_ATQBAL_ATQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_FW_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_FW_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_FW_PSM_ATQH				0x00080344 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQH_ATQH_S			0
+#define PF0_FW_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN			0x00080244 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_FW_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_FW_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_FW_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_FW_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_FW_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_FW_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_FW_PSM_ATQT				0x00080444 /* Reset Source: EMPR */
+#define PF0_FW_PSM_ATQT_ATQT_S			0
+#define PF0_FW_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQBAH			0x0022E5D8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ARQBAL			0x0022E5D4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ARQH			0x0022E5E0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQH_ARQH_S			0
+#define PF0_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN			0x0022E5DC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ARQT			0x0022E5E4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ARQT_ARQT_S			0
+#define PF0_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQBAH			0x0022E5C4 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_CPM_ATQBAL			0x0022E5C0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_CPM_ATQH			0x0022E5CC /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQH_ATQH_S			0
+#define PF0_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN			0x0022E5C8 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_CPM_ATQT			0x0022E5D0 /* Reset Source: CORER */
+#define PF0_MBX_CPM_ATQT_ATQT_S			0
+#define PF0_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQBAH			0x0022E600 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ARQBAL			0x0022E5FC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ARQH			0x0022E608 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQH_ARQH_S			0
+#define PF0_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN			0x0022E604 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ARQT			0x0022E60C /* Reset Source: CORER */
+#define PF0_MBX_HLP_ARQT_ARQT_S			0
+#define PF0_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQBAH			0x0022E5EC /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_HLP_ATQBAL			0x0022E5E8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_HLP_ATQH			0x0022E5F4 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQH_ATQH_S			0
+#define PF0_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN			0x0022E5F0 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_HLP_ATQT			0x0022E5F8 /* Reset Source: CORER */
+#define PF0_MBX_HLP_ATQT_ATQT_S			0
+#define PF0_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQBAH			0x0022E628 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define PF0_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ARQBAL			0x0022E624 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define PF0_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ARQH			0x0022E630 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQH_ARQH_S			0
+#define PF0_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN			0x0022E62C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define PF0_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define PF0_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define PF0_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define PF0_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define PF0_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ARQT			0x0022E634 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ARQT_ARQT_S			0
+#define PF0_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQBAH			0x0022E614 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define PF0_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_MBX_PSM_ATQBAL			0x0022E610 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define PF0_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_MBX_PSM_ATQH			0x0022E61C /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQH_ATQH_S			0
+#define PF0_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN			0x0022E618 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define PF0_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define PF0_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define PF0_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define PF0_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define PF0_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_MBX_PSM_ATQT			0x0022E620 /* Reset Source: CORER */
+#define PF0_MBX_PSM_ATQT_ATQT_S			0
+#define PF0_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQBAH			0x0022E650 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAH_ARQBAH_S		0
+#define PF0_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ARQBAL			0x0022E64C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_CPM_ARQBAL_ARQBAL_S		6
+#define PF0_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ARQH				0x0022E658 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQH_ARQH_S			0
+#define PF0_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN			0x0022E654 /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQLEN_ARQLEN_S		0
+#define PF0_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ARQLEN_ARQVFE_S		28
+#define PF0_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ARQT				0x0022E65C /* Reset Source: CORER */
+#define PF0_SB_CPM_ARQT_ARQT_S			0
+#define PF0_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQBAH			0x0022E63C /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAH_ATQBAH_S		0
+#define PF0_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_CPM_ATQBAL			0x0022E638 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQBAL_ATQBAL_S		6
+#define PF0_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_CPM_ATQH				0x0022E644 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQH_ATQH_S			0
+#define PF0_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN			0x0022E640 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQLEN_ATQLEN_S		0
+#define PF0_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_ATQLEN_ATQVFE_S		28
+#define PF0_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_CPM_ATQT				0x0022E648 /* Reset Source: CORER */
+#define PF0_SB_CPM_ATQT_ATQT_S			0
+#define PF0_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_CPM_REM_DEV_CTL			0x002300F4 /* Reset Source: CORER */
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_CPM_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define PF0_SB_HLP_ARQBAH			0x002300D8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAH_ARQBAH_S		0
+#define PF0_SB_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ARQBAL			0x002300D4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define PF0_SB_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define PF0_SB_HLP_ARQBAL_ARQBAL_S		6
+#define PF0_SB_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ARQH				0x002300E0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQH_ARQH_S			0
+#define PF0_SB_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN			0x002300DC /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQLEN_ARQLEN_S		0
+#define PF0_SB_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ARQLEN_ARQVFE_S		28
+#define PF0_SB_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_S		29
+#define PF0_SB_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_S		30
+#define PF0_SB_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_S		31
+#define PF0_SB_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ARQT				0x002300E4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ARQT_ARQT_S			0
+#define PF0_SB_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQBAH			0x002300C4 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAH_ATQBAH_S		0
+#define PF0_SB_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PF0_SB_HLP_ATQBAL			0x002300C0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQBAL_ATQBAL_S		6
+#define PF0_SB_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define PF0_SB_HLP_ATQH				0x002300CC /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQH_ATQH_S			0
+#define PF0_SB_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN			0x002300C8 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQLEN_ATQLEN_S		0
+#define PF0_SB_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_ATQLEN_ATQVFE_S		28
+#define PF0_SB_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_S		29
+#define PF0_SB_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_S		30
+#define PF0_SB_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_S		31
+#define PF0_SB_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define PF0_SB_HLP_ATQT				0x002300D0 /* Reset Source: CORER */
+#define PF0_SB_HLP_ATQT_ATQT_S			0
+#define PF0_SB_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PF0_SB_HLP_REM_DEV_CTL			0x002300E8 /* Reset Source: CORER */
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_S	0
+#define PF0_SB_HLP_REM_DEV_CTL_DEST_EN_M	MAKEMASK(0xFFFF, 0)
+#define SB_REM_DEV_DEST(_i)			(0x002300F8 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define SB_REM_DEV_DEST_MAX_INDEX		7
+#define SB_REM_DEV_DEST_DEST_S			0
+#define SB_REM_DEV_DEST_DEST_M			MAKEMASK(0xF, 0)
+#define SB_REM_DEV_DEST_DEST_VALID_S		31
+#define SB_REM_DEV_DEST_DEST_VALID_M		BIT(31)
+#define VF_MBX_ARQBAH(_VF)			(0x0022B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAH_MAX_INDEX			255
+#define VF_MBX_ARQBAH_ARQBAH_S			0
+#define VF_MBX_ARQBAH_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL(_VF)			(0x0022B400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQBAL_MAX_INDEX			255
+#define VF_MBX_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL_ARQBAL_S			6
+#define VF_MBX_ARQBAL_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH(_VF)			(0x0022C000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQH_MAX_INDEX			255
+#define VF_MBX_ARQH_ARQH_S			0
+#define VF_MBX_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN(_VF)			(0x0022BC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQLEN_MAX_INDEX			255
+#define VF_MBX_ARQLEN_ARQLEN_S			0
+#define VF_MBX_ARQLEN_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN_ARQVFE_S			28
+#define VF_MBX_ARQLEN_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN_ARQOVFL_S			29
+#define VF_MBX_ARQLEN_ARQOVFL_M			BIT(29)
+#define VF_MBX_ARQLEN_ARQCRIT_S			30
+#define VF_MBX_ARQLEN_ARQCRIT_M			BIT(30)
+#define VF_MBX_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT(_VF)			(0x0022C400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ARQT_MAX_INDEX			255
+#define VF_MBX_ARQT_ARQT_S			0
+#define VF_MBX_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH(_VF)			(0x0022A400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAH_MAX_INDEX			255
+#define VF_MBX_ATQBAH_ATQBAH_S			0
+#define VF_MBX_ATQBAH_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL(_VF)			(0x0022A000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQBAL_MAX_INDEX			255
+#define VF_MBX_ATQBAL_ATQBAL_S			6
+#define VF_MBX_ATQBAL_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH(_VF)			(0x0022AC00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQH_MAX_INDEX			255
+#define VF_MBX_ATQH_ATQH_S			0
+#define VF_MBX_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN(_VF)			(0x0022A800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQLEN_MAX_INDEX			255
+#define VF_MBX_ATQLEN_ATQLEN_S			0
+#define VF_MBX_ATQLEN_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN_ATQVFE_S			28
+#define VF_MBX_ATQLEN_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN_ATQOVFL_S			29
+#define VF_MBX_ATQLEN_ATQOVFL_M			BIT(29)
+#define VF_MBX_ATQLEN_ATQCRIT_S			30
+#define VF_MBX_ATQLEN_ATQCRIT_M			BIT(30)
+#define VF_MBX_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT(_VF)			(0x0022B000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VF_MBX_ATQT_MAX_INDEX			255
+#define VF_MBX_ATQT_ATQT_S			0
+#define VF_MBX_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
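
The parameterized macros encode register arrays, and the per-instance stride differs by family: the per-VF mailbox registers above step by 4 bytes across 256 instances, while the per-VSI mailbox registers earlier in this file step by 4096 bytes across 768 instances. A small self-contained check of that address arithmetic, with values copied verbatim from the definitions above:

#include <stdio.h>
#include <assert.h>

#define VF_MBX_ARQT(_VF)   (0x0022C400 + ((_VF) * 4))      /* _VF = 0..255 */
#define VSI_MBX_ARQT(_VSI) (0x02000024 + ((_VSI) * 4096))  /* _VSI = 0..767 */

int main(void)
{
	/* Instance 0 sits at the base; each further instance adds the stride. */
	assert(VF_MBX_ARQT(0) == 0x0022C400);
	assert(VF_MBX_ARQT(1) == 0x0022C404);
	assert(VSI_MBX_ARQT(1) == 0x02001024);

	printf("VF 255 ARQT at 0x%08X, VSI 767 ARQT at 0x%08X\n",
	       (unsigned)VF_MBX_ARQT(255), (unsigned)VSI_MBX_ARQT(767));
	return 0;
}
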
+#define VF_MBX_CPM_ARQBAH(_VF128)		(0x0022D400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL(_VF128)		(0x0022D200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH(_VF128)			(0x0022D800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH_MAX_INDEX		127
+#define VF_MBX_CPM_ARQH_ARQH_S			0
+#define VF_MBX_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN(_VF128)		(0x0022D600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT(_VF128)			(0x0022DA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT_MAX_INDEX		127
+#define VF_MBX_CPM_ARQT_ARQT_S			0
+#define VF_MBX_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH(_VF128)		(0x0022CA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL(_VF128)		(0x0022C800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL_MAX_INDEX		127
+#define VF_MBX_CPM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH(_VF128)			(0x0022CE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH_MAX_INDEX		127
+#define VF_MBX_CPM_ATQH_ATQH_S			0
+#define VF_MBX_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN(_VF128)		(0x0022CC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN_MAX_INDEX		127
+#define VF_MBX_CPM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT(_VF128)			(0x0022D000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT_MAX_INDEX		127
+#define VF_MBX_CPM_ATQT_ATQT_S			0
+#define VF_MBX_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH(_VF16)		(0x0022DD80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAH_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL(_VF16)		(0x0022DD40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH(_VF16)			(0x0022DE00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH_MAX_INDEX		15
+#define VF_MBX_HLP_ARQH_ARQH_S			0
+#define VF_MBX_HLP_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN(_VF16)		(0x0022DDC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ARQLEN_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT(_VF16)			(0x0022DE40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT_MAX_INDEX		15
+#define VF_MBX_HLP_ARQT_ARQT_S			0
+#define VF_MBX_HLP_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH(_VF16)		(0x0022DC40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAH_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL(_VF16)		(0x0022DC00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL_MAX_INDEX		15
+#define VF_MBX_HLP_ATQBAL_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH(_VF16)			(0x0022DCC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH_MAX_INDEX		15
+#define VF_MBX_HLP_ATQH_ATQH_S			0
+#define VF_MBX_HLP_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN(_VF16)		(0x0022DC80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN_MAX_INDEX		15
+#define VF_MBX_HLP_ATQLEN_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT(_VF16)			(0x0022DD00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT_MAX_INDEX		15
+#define VF_MBX_HLP_ATQT_ATQT_S			0
+#define VF_MBX_HLP_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH(_VF16)		(0x0022E000 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAH_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL(_VF16)		(0x0022DFC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH(_VF16)			(0x0022E080 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH_MAX_INDEX		15
+#define VF_MBX_PSM_ARQH_ARQH_S			0
+#define VF_MBX_PSM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN(_VF16)		(0x0022E040 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ARQLEN_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT(_VF16)			(0x0022E0C0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT_MAX_INDEX		15
+#define VF_MBX_PSM_ARQT_ARQT_S			0
+#define VF_MBX_PSM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH(_VF16)		(0x0022DEC0 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAH_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL(_VF16)		(0x0022DE80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL_MAX_INDEX		15
+#define VF_MBX_PSM_ATQBAL_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH(_VF16)			(0x0022DF40 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH_MAX_INDEX		15
+#define VF_MBX_PSM_ATQH_ATQH_S			0
+#define VF_MBX_PSM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN(_VF16)		(0x0022DF00 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN_MAX_INDEX		15
+#define VF_MBX_PSM_ATQLEN_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT(_VF16)			(0x0022DF80 + ((_VF16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT_MAX_INDEX		15
+#define VF_MBX_PSM_ATQT_ATQT_S			0
+#define VF_MBX_PSM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH(_VF128)		(0x0022F400 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAH_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL(_VF128)		(0x0022F200 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH(_VF128)			(0x0022F800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH_MAX_INDEX		127
+#define VF_SB_CPM_ARQH_ARQH_S			0
+#define VF_SB_CPM_ARQH_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN(_VF128)		(0x0022F600 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ARQLEN_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT(_VF128)			(0x0022FA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT_MAX_INDEX		127
+#define VF_SB_CPM_ARQT_ARQT_S			0
+#define VF_SB_CPM_ARQT_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH(_VF128)		(0x0022EA00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAH_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL(_VF128)		(0x0022E800 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL_MAX_INDEX		127
+#define VF_SB_CPM_ATQBAL_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH(_VF128)			(0x0022EE00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH_MAX_INDEX		127
+#define VF_SB_CPM_ATQH_ATQH_S			0
+#define VF_SB_CPM_ATQH_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN(_VF128)		(0x0022EC00 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN_MAX_INDEX		127
+#define VF_SB_CPM_ATQLEN_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT(_VF128)			(0x0022F000 + ((_VF128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT_MAX_INDEX		127
+#define VF_SB_CPM_ATQT_ATQT_S			0
+#define VF_SB_CPM_ATQT_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_REM_DEV_CTL			0x002300EC /* Reset Source: CORER */
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_S		0
+#define VF_SB_CPM_REM_DEV_CTL_DEST_EN_M		MAKEMASK(0xFFFF, 0)
+#define VP_MBX_CPM_PF_VF_CTRL(_VP128)		(0x00231800 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_MBX_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_CPM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_HLP_PF_VF_CTRL(_VP16)		(0x00231A00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_HLP_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_HLP_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_MBX_PF_VF_CTRL(_VSI)			(0x00230800 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VP_MBX_PF_VF_CTRL_MAX_INDEX		767
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_MBX_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
+#define VP_MBX_PSM_PF_VF_CTRL(_VP16)		(0x00231A40 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VP_MBX_PSM_PF_VF_CTRL_MAX_INDEX		15
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_S	0
+#define VP_MBX_PSM_PF_VF_CTRL_QUEUE_EN_M	BIT(0)
+#define VP_SB_CPM_PF_VF_CTRL(_VP128)		(0x00231C00 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VP_SB_CPM_PF_VF_CTRL_MAX_INDEX		127
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_S		0
+#define VP_SB_CPM_PF_VF_CTRL_QUEUE_EN_M		BIT(0)
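+/* Usage note: every register field above is exposed as a shift/mask pair,
+ * where <FIELD>_S is the bit offset and <FIELD>_M is the mask already
+ * shifted to that offset (BIT() for single-bit fields, MAKEMASK()
+ * otherwise). A minimal field-extract sketch, assuming the ICE_READ_REG()
+ * accessor from ice_osdep.h and hypothetical 'hw'/'vf_idx' variables:
+ *   u32 v = ICE_READ_REG(hw, VF_MBX_HLP_ATQLEN(vf_idx));
+ *   u32 qlen = (v & VF_MBX_HLP_ATQLEN_ATQLEN_M) >>
+ *              VF_MBX_HLP_ATQLEN_ATQLEN_S;
+ */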
+#define GL_DCB_TDSCP2TC_BLOCK_DIS		0x00049218 /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_DIS_DSCP2TC_BLOCK_DIS_M BIT(0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4(_i)		(0x00049018 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV4_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6(_i)		(0x00049118 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_MAX_INDEX	63
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_S 0
+#define GL_DCB_TDSCP2TC_BLOCK_IPV6_TC_BLOCK_LUT_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_GENC				0x00083044 /* Reset Source: CORER */
+#define GLDCB_GENC_PCIRTT_S			0
+#define GLDCB_GENC_PCIRTT_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_PRS_RETSTCC(_i)			(0x002000B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_PRS_RETSTCC_MAX_INDEX		31
+#define GLDCB_PRS_RETSTCC_BWSHARE_S		0
+#define GLDCB_PRS_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_PRS_RETSTCC_ETSTC_S		31
+#define GLDCB_PRS_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_PRS_RSPMC				0x00200160 /* Reset Source: CORER */
+#define GLDCB_PRS_RSPMC_RSPM_S			0
+#define GLDCB_PRS_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_PRS_RSPMC_RPM_MODE_S		8
+#define GLDCB_PRS_RSPMC_RPM_MODE_M		MAKEMASK(0x3, 8)
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_PRS_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_PRS_RSPMC_PFCTIMER_S		14
+#define GLDCB_PRS_RSPMC_PFCTIMER_M		MAKEMASK(0x3FFF, 14)
+#define GLDCB_PRS_RSPMC_RPM_DIS_S		31
+#define GLDCB_PRS_RSPMC_RPM_DIS_M		BIT(31)
+#define GLDCB_RETSTCC(_i)			(0x00122140 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCC_MAX_INDEX			31
+#define GLDCB_RETSTCC_BWSHARE_S			0
+#define GLDCB_RETSTCC_BWSHARE_M			MAKEMASK(0x7F, 0)
+#define GLDCB_RETSTCC_ETSTC_S			31
+#define GLDCB_RETSTCC_ETSTC_M			BIT(31)
+#define GLDCB_RETSTCS(_i)			(0x001221C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RETSTCS_MAX_INDEX			31
+#define GLDCB_RETSTCS_CREDITS_S			0
+#define GLDCB_RETSTCS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTC2PFC_RCB			0x00122100 /* Reset Source: CORER */
+#define GLDCB_RTC2PFC_RCB_TC2PFC_S		0
+#define GLDCB_RTC2PFC_RCB_TC2PFC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_SWT_RETSTCC(_i)			(0x0020A040 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_SWT_RETSTCC_MAX_INDEX		31
+#define GLDCB_SWT_RETSTCC_BWSHARE_S		0
+#define GLDCB_SWT_RETSTCC_BWSHARE_M		MAKEMASK(0x7F, 0)
+#define GLDCB_SWT_RETSTCC_ETSTC_S		31
+#define GLDCB_SWT_RETSTCC_ETSTC_M		BIT(31)
+#define GLDCB_TC2PFC				0x001D2694 /* Reset Source: CORER */
+#define GLDCB_TC2PFC_TC2PFC_S			0
+#define GLDCB_TC2PFC_TC2PFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_MNG_SP			0x000AE12C /* Reset Source: CORER */
+#define GLDCB_TCB_MNG_SP_MNG_SP_S		0
+#define GLDCB_TCB_MNG_SP_MNG_SP_M		BIT(0)
+#define GLDCB_TCB_TCLL_CFG			0x000AE134 /* Reset Source: CORER */
+#define GLDCB_TCB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TCB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCB_WB_SP				0x000AE310 /* Reset Source: CORER */
+#define GLDCB_TCB_WB_SP_WB_SP_S			0
+#define GLDCB_TCB_WB_SP_WB_SP_M			BIT(0)
+#define GLDCB_TCUPM_IMM_EN			0x000BC824 /* Reset Source: CORER */
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_S		0
+#define GLDCB_TCUPM_IMM_EN_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_LEGACY_TC			0x000BC828 /* Reset Source: CORER */
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_S		0
+#define GLDCB_TCUPM_LEGACY_TC_LEGTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TCUPM_NO_EXCEED_DIS		0x000BC830 /* Reset Source: CORER */
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_S 0
+#define GLDCB_TCUPM_NO_EXCEED_DIS_NON_EXCEED_DIS_M BIT(0)
+#define GLDCB_TCUPM_WB_DIS			0x000BC834 /* Reset Source: CORER */
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_S	0
+#define GLDCB_TCUPM_WB_DIS_PORT_DISABLE_M	BIT(0)
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_S		1
+#define GLDCB_TCUPM_WB_DIS_TC_DISABLE_M		BIT(1)
+#define GLDCB_TFPFCI				0x0009949C /* Reset Source: CORER */
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_S		0
+#define GLDCB_TFPFCI_GLDCB_TFPFCI_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCB			0x000A0190 /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_IMM_TCUPM			0x000A018C /* Reset Source: CORER */
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_S		0
+#define GLDCB_TLPM_IMM_TCUPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TLPM_PCI_DM			0x000A0180 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DM_MONITOR_S		0
+#define GLDCB_TLPM_PCI_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define GLDCB_TLPM_PCI_DTHR			0x000A0184 /* Reset Source: CORER */
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_S		0
+#define GLDCB_TLPM_PCI_DTHR_PCI_TDATA_M		MAKEMASK(0xFFF, 0)
+#define GLDCB_TPB_IMM_TLPM			0x00099468 /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TLPM_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_IMM_TPB			0x0009946C /* Reset Source: CORER */
+#define GLDCB_TPB_IMM_TPB_IMM_EN_S		0
+#define GLDCB_TPB_IMM_TPB_IMM_EN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_TPB_TCLL_CFG			0x00099464 /* Reset Source: CORER */
+#define GLDCB_TPB_TCLL_CFG_LLTC_S		0
+#define GLDCB_TPB_TCLL_CFG_LLTC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTCB_BULK_DWRR_REG_QUANTA		0x000AE0E0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_REG_SAT			0x000AE0F0 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_BULK_DWRR_WB_QUANTA		0x000AE0E4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_BULK_DWRR_WB_SAT			0x000AE0F4 /* Reset Source: CORER */
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_CREDIT_EXP_CTL			0x000AE120 /* Reset Source: CORER */
+#define GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_S		1
+#define GLTCB_CREDIT_EXP_CTL_MIN_PKT_M		MAKEMASK(0x1FF, 1)
+#define GLTCB_LL_DWRR_REG_QUANTA		0x000AE0E8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_REG_SAT			0x000AE0F8 /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_LL_DWRR_WB_QUANTA			0x000AE0EC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_S	0
+#define GLTCB_LL_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define GLTCB_LL_DWRR_WB_SAT			0x000AE0FC /* Reset Source: CORER */
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_S	0
+#define GLTCB_LL_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define GLTCB_WB_RL				0x000AE238 /* Reset Source: CORER */
+#define GLTCB_WB_RL_PERIOD_S			0
+#define GLTCB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTCB_WB_RL_EN_S			16
+#define GLTCB_WB_RL_EN_M			BIT(16)
+#define GLTPB_WB_RL				0x00099460 /* Reset Source: CORER */
+#define GLTPB_WB_RL_PERIOD_S			0
+#define GLTPB_WB_RL_PERIOD_M			MAKEMASK(0xFFFF, 0)
+#define GLTPB_WB_RL_EN_S			16
+#define GLTPB_WB_RL_EN_M			BIT(16)
+#define PRTDCB_FCCFG				0x001E4640 /* Reset Source: GLOBR */
+#define PRTDCB_FCCFG_TFCE_S			3
+#define PRTDCB_FCCFG_TFCE_M			MAKEMASK(0x3, 3)
+#define PRTDCB_FCRTV				0x001E4600 /* Reset Source: GLOBR */
+#define PRTDCB_FCRTV_FC_REFRESH_TH_S		0
+#define PRTDCB_FCRTV_FC_REFRESH_TH_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN(_i)			(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: GLOBR */
+#define PRTDCB_FCTTVN_MAX_INDEX			3
+#define PRTDCB_FCTTVN_TTV_2N_S			0
+#define PRTDCB_FCTTVN_TTV_2N_M			MAKEMASK(0xFFFF, 0)
+#define PRTDCB_FCTTVN_TTV_2N_P1_S		16
+#define PRTDCB_FCTTVN_TTV_2N_P1_M		MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENC				0x00083000 /* Reset Source: CORER */
+#define PRTDCB_GENC_NUMTC_S			2
+#define PRTDCB_GENC_NUMTC_M			MAKEMASK(0xF, 2)
+#define PRTDCB_GENC_FCOEUP_S			6
+#define PRTDCB_GENC_FCOEUP_M			MAKEMASK(0x7, 6)
+#define PRTDCB_GENC_FCOEUP_VALID_S		9
+#define PRTDCB_GENC_FCOEUP_VALID_M		BIT(9)
+#define PRTDCB_GENC_PFCLDA_S			16
+#define PRTDCB_GENC_PFCLDA_M			MAKEMASK(0xFFFF, 16)
+#define PRTDCB_GENS				0x00083020 /* Reset Source: CORER */
+#define PRTDCB_GENS_DCBX_STATUS_S		0
+#define PRTDCB_GENS_DCBX_STATUS_M		MAKEMASK(0x7, 0)
+#define PRTDCB_PRS_RETSC			0x002001A0 /* Reset Source: CORER */
+#define PRTDCB_PRS_RETSC_ETS_MODE_S		0
+#define PRTDCB_PRS_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_PRS_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_PRS_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_PRS_RPRRC			0x00200180 /* Reset Source: CORER */
+#define PRTDCB_PRS_RPRRC_BWSHARE_S		0
+#define PRTDCB_PRS_RPRRC_BWSHARE_M		MAKEMASK(0x3FF, 0)
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_PRS_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RETSC				0x001222A0 /* Reset Source: CORER */
+#define PRTDCB_RETSC_ETS_MODE_S			0
+#define PRTDCB_RETSC_ETS_MODE_M			BIT(0)
+#define PRTDCB_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_RPRRC				0x001220C0 /* Reset Source: CORER */
+#define PRTDCB_RPRRC_BWSHARE_S			0
+#define PRTDCB_RPRRC_BWSHARE_M			MAKEMASK(0x3FF, 0)
+#define PRTDCB_RPRRC_BWSHARE_DIS_S		31
+#define PRTDCB_RPRRC_BWSHARE_DIS_M		BIT(31)
+#define PRTDCB_RPRRS				0x001220E0 /* Reset Source: CORER */
+#define PRTDCB_RPRRS_CREDITS_S			0
+#define PRTDCB_RPRRS_CREDITS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTDCB_RUP_TDPU				0x00040960 /* Reset Source: CORER */
+#define PRTDCB_RUP_TDPU_NOVLANUP_S		0
+#define PRTDCB_RUP_TDPU_NOVLANUP_M		MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC				0x001D2640 /* Reset Source: CORER */
+#define PRTDCB_RUP2TC_UP0TC_S			0
+#define PRTDCB_RUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_RUP2TC_UP1TC_S			3
+#define PRTDCB_RUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_RUP2TC_UP2TC_S			6
+#define PRTDCB_RUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_RUP2TC_UP3TC_S			9
+#define PRTDCB_RUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_RUP2TC_UP4TC_S			12
+#define PRTDCB_RUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_RUP2TC_UP5TC_S			15
+#define PRTDCB_RUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_RUP2TC_UP6TC_S			18
+#define PRTDCB_RUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_RUP2TC_UP7TC_S			21
+#define PRTDCB_RUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_SWT_RETSC			0x0020A140 /* Reset Source: CORER */
+#define PRTDCB_SWT_RETSC_ETS_MODE_S		0
+#define PRTDCB_SWT_RETSC_ETS_MODE_M		BIT(0)
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_S		1
+#define PRTDCB_SWT_RETSC_NON_ETS_MODE_M		BIT(1)
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_S		2
+#define PRTDCB_SWT_RETSC_ETS_MAX_EXP_M		MAKEMASK(0xF, 2)
+#define PRTDCB_TCB_DWRR_CREDITS			0x000AE000 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_TCB_DWRR_QUANTA			0x000AE020 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_S		0
+#define PRTDCB_TCB_DWRR_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define PRTDCB_TCB_DWRR_SAT			0x000AE040 /* Reset Source: CORER */
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define PRTDCB_TCUPM_NO_EXCEED_DM		0x000BC3C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_NO_EXCEED_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_CM			0x000BC360 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR			0x000BC380 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_H_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_S	15
+#define PRTDCB_TCUPM_REG_CTHR_PORTOFFTH_L_M	MAKEMASK(0x7FFF, 15)
+#define PRTDCB_TCUPM_REG_DM			0x000BC3A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TCUPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR			0x000BC3E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TCUPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_REG_PE_HB_DM		0x000BC400 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_REG_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR		0x000BC420 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_S 0
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_H_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_S 12
+#define PRTDCB_TCUPM_REG_PE_HB_DTHR_PORTOFFTH_L_M MAKEMASK(0xFFF, 12)
+#define PRTDCB_TCUPM_WAIT_PFC_CM		0x000BC440 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CM_MONITOR_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR		0x000BC460 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_CTHR_PORTOFFTH_M	MAKEMASK(0x7FFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DM		0x000BC480 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR		0x000BC4A0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TCUPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM		0x000BC4C0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DM_MONITOR_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR	0x000BC4E0 /* Reset Source: CORER */
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_S 0
+#define PRTDCB_TCUPM_WAIT_PFC_PE_HB_DTHR_PORTOFFTH_M MAKEMASK(0xFFF, 0)
+#define PRTDCB_TDPUC				0x00040940 /* Reset Source: CORER */
+#define PRTDCB_TDPUC_MAX_TXFRAME_S		0
+#define PRTDCB_TDPUC_MAX_TXFRAME_M		MAKEMASK(0xFFFF, 0)
+#define PRTDCB_TDPUC_MAL_LENGTH_S		16
+#define PRTDCB_TDPUC_MAL_LENGTH_M		BIT(16)
+#define PRTDCB_TDPUC_MAL_CMD_S			17
+#define PRTDCB_TDPUC_MAL_CMD_M			BIT(17)
+#define PRTDCB_TDPUC_TTL_DROP_S			18
+#define PRTDCB_TDPUC_TTL_DROP_M			BIT(18)
+#define PRTDCB_TDPUC_UR_DROP_S			19
+#define PRTDCB_TDPUC_UR_DROP_M			BIT(19)
+#define PRTDCB_TDPUC_DUMMY_S			20
+#define PRTDCB_TDPUC_DUMMY_M			BIT(20)
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_S		21
+#define PRTDCB_TDPUC_BIG_PKT_SIZE_M		BIT(21)
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_S		22
+#define PRTDCB_TDPUC_L2_ACCEPT_FAIL_M		BIT(22)
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_S		23
+#define PRTDCB_TDPUC_DSCP_CHECK_FAIL_M		BIT(23)
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_S		24
+#define PRTDCB_TDPUC_RCU_ANTISPOOF_M		BIT(24)
+#define PRTDCB_TDPUC_NIC_DSI_S			25
+#define PRTDCB_TDPUC_NIC_DSI_M			BIT(25)
+#define PRTDCB_TDPUC_NIC_IPSEC_S		26
+#define PRTDCB_TDPUC_NIC_IPSEC_M		BIT(26)
+#define PRTDCB_TDPUC_CLEAR_DROP_S		31
+#define PRTDCB_TDPUC_CLEAR_DROP_M		BIT(31)
+#define PRTDCB_TFCS				0x001E4560 /* Reset Source: GLOBR */
+#define PRTDCB_TFCS_TXOFF_S			0
+#define PRTDCB_TFCS_TXOFF_M			BIT(0)
+#define PRTDCB_TFCS_TXOFF0_S			8
+#define PRTDCB_TFCS_TXOFF0_M			BIT(8)
+#define PRTDCB_TFCS_TXOFF1_S			9
+#define PRTDCB_TFCS_TXOFF1_M			BIT(9)
+#define PRTDCB_TFCS_TXOFF2_S			10
+#define PRTDCB_TFCS_TXOFF2_M			BIT(10)
+#define PRTDCB_TFCS_TXOFF3_S			11
+#define PRTDCB_TFCS_TXOFF3_M			BIT(11)
+#define PRTDCB_TFCS_TXOFF4_S			12
+#define PRTDCB_TFCS_TXOFF4_M			BIT(12)
+#define PRTDCB_TFCS_TXOFF5_S			13
+#define PRTDCB_TFCS_TXOFF5_M			BIT(13)
+#define PRTDCB_TFCS_TXOFF6_S			14
+#define PRTDCB_TFCS_TXOFF6_M			BIT(14)
+#define PRTDCB_TFCS_TXOFF7_S			15
+#define PRTDCB_TFCS_TXOFF7_M			BIT(15)
+#define PRTDCB_TLPM_REG_DM			0x000A0000 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DM_MONITOR_S		0
+#define PRTDCB_TLPM_REG_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR			0x000A0020 /* Reset Source: CORER */
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_S	0
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_H_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_S	12
+#define PRTDCB_TLPM_REG_DTHR_PORTOFFTH_L_M	MAKEMASK(0xFFF, 12)
+#define PRTDCB_TLPM_WAIT_PFC_DM			0x000A0040 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DM_MONITOR_M	MAKEMASK(0x7FFFF, 0)
+#define PRTDCB_TLPM_WAIT_PFC_DTHR		0x000A0060 /* Reset Source: CORER */
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_S	0
+#define PRTDCB_TLPM_WAIT_PFC_DTHR_PORTOFFTH_M	MAKEMASK(0xFFF, 0)
+#define PRTDCB_TPFCTS(_i)			(0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTDCB_TPFCTS_MAX_INDEX			7
+#define PRTDCB_TPFCTS_PFCTIMER_S		0
+#define PRTDCB_TPFCTS_PFCTIMER_M		MAKEMASK(0x3FFF, 0)
+#define PRTDCB_TUP2TC				0x001D26C0 /* Reset Source: CORER */
+#define PRTDCB_TUP2TC_UP0TC_S			0
+#define PRTDCB_TUP2TC_UP0TC_M			MAKEMASK(0x7, 0)
+#define PRTDCB_TUP2TC_UP1TC_S			3
+#define PRTDCB_TUP2TC_UP1TC_M			MAKEMASK(0x7, 3)
+#define PRTDCB_TUP2TC_UP2TC_S			6
+#define PRTDCB_TUP2TC_UP2TC_M			MAKEMASK(0x7, 6)
+#define PRTDCB_TUP2TC_UP3TC_S			9
+#define PRTDCB_TUP2TC_UP3TC_M			MAKEMASK(0x7, 9)
+#define PRTDCB_TUP2TC_UP4TC_S			12
+#define PRTDCB_TUP2TC_UP4TC_M			MAKEMASK(0x7, 12)
+#define PRTDCB_TUP2TC_UP5TC_S			15
+#define PRTDCB_TUP2TC_UP5TC_M			MAKEMASK(0x7, 15)
+#define PRTDCB_TUP2TC_UP6TC_S			18
+#define PRTDCB_TUP2TC_UP6TC_M			MAKEMASK(0x7, 18)
+#define PRTDCB_TUP2TC_UP7TC_S			21
+#define PRTDCB_TUP2TC_UP7TC_M			MAKEMASK(0x7, 21)
+#define PRTDCB_TX_DSCP2UP_CTL			0x00040980 /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_S	0
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP2UP_ENA_M	BIT(0)
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_S 1
+#define PRTDCB_TX_DSCP2UP_CTL_DSCP_DEFAULT_UP_M MAKEMASK(0x7, 1)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT(_i)		(0x000409A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV4_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT(_i)		(0x00040AA0 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: CORER */
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_MAX_INDEX	7
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_S 0
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_0_M MAKEMASK(0x7, 0)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_S 4
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_1_M MAKEMASK(0x7, 4)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_S 8
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_2_M MAKEMASK(0x7, 8)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_S 12
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_3_M MAKEMASK(0x7, 12)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_S 16
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_4_M MAKEMASK(0x7, 16)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_S 20
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_5_M MAKEMASK(0x7, 20)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_S 24
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_6_M MAKEMASK(0x7, 24)
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_S 28
+#define PRTDCB_TX_DSCP2UP_IPV6_LUT_DSCP2UP_LUT_7_M MAKEMASK(0x7, 28)
+#define PRTTCB_BULK_DWRR_REG_CREDITS		0x000AE060 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_BULK_DWRR_WB_CREDITS		0x000AE080 /* Reset Source: CORER */
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_CREDIT_EXP			0x000AE100 /* Reset Source: CORER */
+#define PRTTCB_CREDIT_EXP_EXPANSION_S		0
+#define PRTTCB_CREDIT_EXP_EXPANSION_M		MAKEMASK(0xFF, 0)
+#define PRTTCB_LL_DWRR_REG_CREDITS		0x000AE0A0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define PRTTCB_LL_DWRR_WB_CREDITS		0x000AE0C0 /* Reset Source: CORER */
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S	0
+#define PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TCDCB_TCUPM_WAIT_CM(_i)			(0x000BC520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_CM_MONITOR_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_CTHR(_i)		(0x000BC5A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_CTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_CTHR_TCOFFTH_M		MAKEMASK(0x7FFF, 0)
+#define TCDCB_TCUPM_WAIT_DM(_i)			(0x000BC620 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TCUPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TCUPM_WAIT_DTHR(_i)		(0x000BC6A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TCUPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DM(_i)		(0x000BC720 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DM_MONITOR_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR(_i)		(0x000BC7A0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_MAX_INDEX	31
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_S	0
+#define TCDCB_TCUPM_WAIT_PE_HB_DTHR_TCOFFTH_M	MAKEMASK(0xFFF, 0)
+#define TCDCB_TLPM_WAIT_DM(_i)			(0x000A0080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DM_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DM_MONITOR_S		0
+#define TCDCB_TLPM_WAIT_DM_MONITOR_M		MAKEMASK(0x7FFFF, 0)
+#define TCDCB_TLPM_WAIT_DTHR(_i)		(0x000A0100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCDCB_TLPM_WAIT_DTHR_MAX_INDEX		31
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_S		0
+#define TCDCB_TLPM_WAIT_DTHR_TCOFFTH_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG(_i)			(0x000AE138 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_CFG_TOKENS_S		0
+#define TCTCB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TCTCB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TCTCB_WB_RL_TC_STAT(_i)			(0x000AE1B8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TCTCB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TCTCB_WB_RL_TC_STAT_BUCKET_S		0
+#define TCTCB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_REG_QUANTA		0x00099340 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_REG_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_REG_SAT			0x00099350 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_BULK_DWRR_WB_QUANTA			0x00099344 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_S	0
+#define TPB_BULK_DWRR_WB_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_BULK_DWRR_WB_SAT			0x00099354 /* Reset Source: CORER */
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_S	0
+#define TPB_BULK_DWRR_WB_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_GLDCB_TCB_WB_SP			0x0009966C /* Reset Source: CORER */
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_S		0
+#define TPB_GLDCB_TCB_WB_SP_WB_SP_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL		0x00099664 /* Reset Source: CORER */
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_S		0
+#define TPB_GLTCB_CREDIT_EXP_CTL_EN_M		BIT(0)
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_S	1
+#define TPB_GLTCB_CREDIT_EXP_CTL_MIN_PKT_M	MAKEMASK(0x1FF, 1)
+#define TPB_LL_DWRR_REG_QUANTA			0x00099348 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_REG_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_REG_SAT			0x00099358 /* Reset Source: CORER */
+#define TPB_LL_DWRR_REG_SAT_SATURATION_S	0
+#define TPB_LL_DWRR_REG_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_LL_DWRR_WB_QUANTA			0x0009934C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_S		0
+#define TPB_LL_DWRR_WB_QUANTA_QUANTA_M		MAKEMASK(0x7FF, 0)
+#define TPB_LL_DWRR_WB_SAT			0x0009935C /* Reset Source: CORER */
+#define TPB_LL_DWRR_WB_SAT_SATURATION_S		0
+#define TPB_LL_DWRR_WB_SAT_SATURATION_M		MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_CREDITS		0x000991C0 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_S	0
+#define TPB_PRTDCB_TCB_DWRR_CREDITS_CREDITS_M	MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTDCB_TCB_DWRR_QUANTA		0x00099220 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_S	0
+#define TPB_PRTDCB_TCB_DWRR_QUANTA_QUANTA_M	MAKEMASK(0x7FF, 0)
+#define TPB_PRTDCB_TCB_DWRR_SAT			0x00099260 /* Reset Source: CORER */
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_S	0
+#define TPB_PRTDCB_TCB_DWRR_SAT_SATURATION_M	MAKEMASK(0x1FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS	0x000992A0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS		0x000992C0 /* Reset Source: CORER */
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_BULK_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_CREDIT_EXP			0x00099644 /* Reset Source: CORER */
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_S	0
+#define TPB_PRTTCB_CREDIT_EXP_EXPANSION_M	MAKEMASK(0xFF, 0)
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS		0x00099300 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_REG_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS		0x00099320 /* Reset Source: CORER */
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_S 0
+#define TPB_PRTTCB_LL_DWRR_WB_CREDITS_CREDITS_M MAKEMASK(0x3FFFF, 0)
+#define TPB_WB_RL_TC_CFG(_i)			(0x00099360 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_CFG_MAX_INDEX		31
+#define TPB_WB_RL_TC_CFG_TOKENS_S		0
+#define TPB_WB_RL_TC_CFG_TOKENS_M		MAKEMASK(0xFFF, 0)
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_S		12
+#define TPB_WB_RL_TC_CFG_BURST_SIZE_M		MAKEMASK(0x3FF, 12)
+#define TPB_WB_RL_TC_STAT(_i)			(0x000993E0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define TPB_WB_RL_TC_STAT_MAX_INDEX		31
+#define TPB_WB_RL_TC_STAT_BUCKET_S		0
+#define TPB_WB_RL_TC_STAT_BUCKET_M		MAKEMASK(0x1FFFF, 0)
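+/* Read-modify-write sketch for a multi-bit field, again assuming the
+ * ICE_READ_REG()/ICE_WRITE_REG() accessors from ice_osdep.h and a
+ * hypothetical traffic-class index 'tc' and bandwidth share 'bw':
+ *   u32 v = ICE_READ_REG(hw, GLDCB_RETSTCC(tc));
+ *   v &= ~GLDCB_RETSTCC_BWSHARE_M;
+ *   v |= (bw << GLDCB_RETSTCC_BWSHARE_S) & GLDCB_RETSTCC_BWSHARE_M;
+ *   ICE_WRITE_REG(hw, GLDCB_RETSTCC(tc), v);
+ */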
+#define GL_ACLEXT_CDMD_L1SEL(_i)		(0x00210054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_ACLEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_ACLEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_ACLEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_ACLEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_ACLEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_ACLEXT_CTLTBL_L2ADDR(_i)		(0x00210084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_ACLEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_CTLTBL_L2DATA(_i)		(0x00210090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_ACLEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL(_i)		(0x00210138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_DFLT_L2PRFL_ACL(_i)		(0x00393800 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_MAX_INDEX	2
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_S	0
+#define GL_ACLEXT_DFLT_L2PRFL_ACL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1(_i)		(0x0021006C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_ACLEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1SEL2_3(_i)		(0x00210078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_ACLEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_ACLEXT_FLGS_L1TBL(_i)		(0x00210060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_ACLEXT_FLGS_L1TBL_LSB_S		0
+#define GL_ACLEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FLGS_L1TBL_MSB_S		16
+#define GL_ACLEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_ACLEXT_FORCE_L1CDID(_i)		(0x00210018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_ACLEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_ACLEXT_FORCE_PID(_i)			(0x00210000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_FORCE_PID_MAX_INDEX		2
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_ACLEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_ACLEXT_K2N_L2ADDR(_i)		(0x00210144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_K2N_L2DATA(_i)		(0x00210150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_K2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_K2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_K2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_K2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2_PMASK0(_i)			(0x002100FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_PMASK1(_i)			(0x00210108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_PMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_ACLEXT_L2_TMASK0(_i)			(0x00210498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK0_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_L2_TMASK1(_i)			(0x002104A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_ACLEXT_L2_TMASK1_BITMASK_S		0
+#define GL_ACLEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3(_i)			(0x002100A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP0_3_BMP0_S		0
+#define GL_ACLEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP0_3_BMP1_S		8
+#define GL_ACLEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP0_3_BMP2_S		16
+#define GL_ACLEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP0_3_BMP3_S		24
+#define GL_ACLEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2BMP4_7(_i)			(0x002100B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_ACLEXT_L2BMP4_7_BMP4_S		0
+#define GL_ACLEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_L2BMP4_7_BMP5_S		8
+#define GL_ACLEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_L2BMP4_7_BMP6_S		16
+#define GL_ACLEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_L2BMP4_7_BMP7_S		24
+#define GL_ACLEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_L2PRTMOD(_i)			(0x0021009C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_ACLEXT_L2PRTMOD_XLT1_S		0
+#define GL_ACLEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_ACLEXT_L2PRTMOD_XLT2_S		8
+#define GL_ACLEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_ACLEXT_N2N_L2ADDR(_i)		(0x0021015C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_N2N_L2DATA(_i)		(0x00210168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_N2N_L2DATA_DATA0_S		0
+#define GL_ACLEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_N2N_L2DATA_DATA1_S		8
+#define GL_ACLEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_ACLEXT_N2N_L2DATA_DATA2_S		16
+#define GL_ACLEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_ACLEXT_N2N_L2DATA_DATA3_S		24
+#define GL_ACLEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_ACLEXT_P2P_L1ADDR(_i)		(0x00210024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_ACLEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_ACLEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_ACLEXT_P2P_L1DATA(_i)		(0x00210030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_P2P_L1DATA_DATA_S		0
+#define GL_ACLEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_PID_L2GKTYPE(_i)		(0x002100F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_ACLEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_ACLEXT_PLVL_SEL(_i)			(0x0021000C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_ACLEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_ACLEXT_TCAM_L2ADDR(_i)		(0x00210114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_TCAM_L2DATALSB(_i)		(0x00210120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_ACLEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_TCAM_L2DATAMSB(_i)		(0x0021012C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_ACLEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR(_i)		(0x0021003C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT0_L1DATA(_i)		(0x00210048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT0_L1DATA_DATA_S		0
+#define GL_ACLEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR(_i)		(0x002100C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT1_L2DATA(_i)		(0x002100CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT1_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR(_i)		(0x002100D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_ACLEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_ACLEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_ACLEXT_XLT2_L2DATA(_i)		(0x002100E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_ACLEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_ACLEXT_XLT2_L2DATA_DATA_S		0
+#define GL_ACLEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_CDMD_L1SEL(_i)		(0x0020F054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PREEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PREEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PREEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PREEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PREEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PREEXT_CTLTBL_L2ADDR(_i)		(0x0020F084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PREEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_CTLTBL_L2DATA(_i)		(0x0020F090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PREEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_DFLT_L2PRFL(_i)		(0x0020F138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PREEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1(_i)		(0x0020F06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PREEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1SEL2_3(_i)		(0x0020F078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PREEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PREEXT_FLGS_L1TBL(_i)		(0x0020F060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PREEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PREEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PREEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PREEXT_FORCE_L1CDID(_i)		(0x0020F018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PREEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PREEXT_FORCE_PID(_i)			(0x0020F000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PREEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PREEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PREEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PREEXT_K2N_L2ADDR(_i)		(0x0020F144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_K2N_L2DATA(_i)		(0x0020F150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2_PMASK0(_i)			(0x0020F0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_PMASK1(_i)			(0x0020F108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PREEXT_L2_TMASK0(_i)			(0x0020F498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_L2_TMASK1(_i)			(0x0020F4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PREEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PREEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3(_i)			(0x0020F0A8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP0_3_MAX_INDEX		2
+#define GL_PREEXT_L2BMP0_3_BMP0_S		0
+#define GL_PREEXT_L2BMP0_3_BMP0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP0_3_BMP1_S		8
+#define GL_PREEXT_L2BMP0_3_BMP1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP0_3_BMP2_S		16
+#define GL_PREEXT_L2BMP0_3_BMP2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP0_3_BMP3_S		24
+#define GL_PREEXT_L2BMP0_3_BMP3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2BMP4_7(_i)			(0x0020F0B4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2BMP4_7_MAX_INDEX		2
+#define GL_PREEXT_L2BMP4_7_BMP4_S		0
+#define GL_PREEXT_L2BMP4_7_BMP4_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_L2BMP4_7_BMP5_S		8
+#define GL_PREEXT_L2BMP4_7_BMP5_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_L2BMP4_7_BMP6_S		16
+#define GL_PREEXT_L2BMP4_7_BMP6_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_L2BMP4_7_BMP7_S		24
+#define GL_PREEXT_L2BMP4_7_BMP7_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_L2PRTMOD(_i)			(0x0020F09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PREEXT_L2PRTMOD_XLT1_S		0
+#define GL_PREEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PREEXT_L2PRTMOD_XLT2_S		8
+#define GL_PREEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PREEXT_N2N_L2ADDR(_i)		(0x0020F15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PREEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PREEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_N2N_L2DATA(_i)		(0x0020F168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PREEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PREEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PREEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PREEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PREEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PREEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PREEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PREEXT_P2P_L1ADDR(_i)		(0x0020F024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PREEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PREEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PREEXT_P2P_L1DATA(_i)		(0x0020F030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_P2P_L1DATA_DATA_S		0
+#define GL_PREEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_PID_L2GKTYPE(_i)		(0x0020F0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PREEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PREEXT_PLVL_SEL(_i)			(0x0020F00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PREEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PREEXT_TCAM_L2ADDR(_i)		(0x0020F114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_TCAM_L2DATALSB(_i)		(0x0020F120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PREEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_TCAM_L2DATAMSB(_i)		(0x0020F12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PREEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR(_i)		(0x0020F03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT0_L1DATA(_i)		(0x0020F048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PREEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT1_L2ADDR(_i)		(0x0020F0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT1_L2DATA(_i)		(0x0020F0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PREEXT_XLT2_L2ADDR(_i)		(0x0020F0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PREEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PREEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PREEXT_XLT2_L2DATA(_i)		(0x0020F0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PREEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PREEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PREEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
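+/* Indexed-array sketch: parameterized registers take an instance index that
+ * must stay within the matching _MAX_INDEX bound (hypothetical loop over a
+ * caller-provided 'data' buffer, ICE_READ_REG() from ice_osdep.h):
+ *   for (i = 0; i <= GL_PREEXT_XLT2_L2DATA_MAX_INDEX; i++)
+ *       data[i] = ICE_READ_REG(hw, GL_PREEXT_XLT2_L2DATA(i));
+ */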
+#define GL_PSTEXT_CDMD_L1SEL(_i)		(0x0020E054 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CDMD_L1SEL_MAX_INDEX		2
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_S		0
+#define GL_PSTEXT_CDMD_L1SEL_RX_SEL_M		MAKEMASK(0x1F, 0)
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_S		8
+#define GL_PSTEXT_CDMD_L1SEL_TX_SEL_M		MAKEMASK(0x1F, 8)
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_S		16
+#define GL_PSTEXT_CDMD_L1SEL_AUX0_SEL_M		MAKEMASK(0x1F, 16)
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_S		24
+#define GL_PSTEXT_CDMD_L1SEL_AUX1_SEL_M		MAKEMASK(0x1F, 24)
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_S	30
+#define GL_PSTEXT_CDMD_L1SEL_BIDIR_ENA_M	MAKEMASK(0x3, 30)
+#define GL_PSTEXT_CTLTBL_L2ADDR(_i)		(0x0020E084 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2ADDR_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_S	0
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_OFF_M	MAKEMASK(0x7, 0)
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_S	8
+#define GL_PSTEXT_CTLTBL_L2ADDR_LINE_IDX_M	MAKEMASK(0x7, 8)
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_CTLTBL_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_CTLTBL_L2DATA(_i)		(0x0020E090 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_CTLTBL_L2DATA_MAX_INDEX	2
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_S		0
+#define GL_PSTEXT_CTLTBL_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_DFLT_L2PRFL(_i)		(0x0020E138 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_DFLT_L2PRFL_MAX_INDEX		2
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_S	0
+#define GL_PSTEXT_DFLT_L2PRFL_DFLT_PRFL_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FL15_BMPLSB(_i)		(0x0020E480 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPLSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_S		0
+#define GL_PSTEXT_FL15_BMPLSB_BMPLSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FL15_BMPMSB(_i)		(0x0020E48C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FL15_BMPMSB_MAX_INDEX		2
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_S		0
+#define GL_PSTEXT_FL15_BMPMSB_BMPMSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1(_i)		(0x0020E06C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL0_1_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_S		0
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS0_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_S		16
+#define GL_PSTEXT_FLGS_L1SEL0_1_FLS1_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1SEL2_3(_i)		(0x0020E078 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1SEL2_3_MAX_INDEX	2
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_S		0
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS2_M		MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_S		16
+#define GL_PSTEXT_FLGS_L1SEL2_3_FLS3_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_FLGS_L1TBL(_i)		(0x0020E060 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FLGS_L1TBL_MAX_INDEX		2
+#define GL_PSTEXT_FLGS_L1TBL_LSB_S		0
+#define GL_PSTEXT_FLGS_L1TBL_LSB_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FLGS_L1TBL_MSB_S		16
+#define GL_PSTEXT_FLGS_L1TBL_MSB_M		MAKEMASK(0xFFFF, 16)
+#define GL_PSTEXT_FORCE_L1CDID(_i)		(0x0020E018 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_L1CDID_MAX_INDEX	2
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_S	0
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_M	MAKEMASK(0xF, 0)
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_S 31
+#define GL_PSTEXT_FORCE_L1CDID_STATIC_CDID_EN_M BIT(31)
+#define GL_PSTEXT_FORCE_PID(_i)			(0x0020E000 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_FORCE_PID_MAX_INDEX		2
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_S	0
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_M	MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_S	31
+#define GL_PSTEXT_FORCE_PID_STATIC_PID_EN_M	BIT(31)
+#define GL_PSTEXT_K2N_L2ADDR(_i)		(0x0020E144 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_K2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x7F, 0)
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_K2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_K2N_L2DATA(_i)		(0x0020E150 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_K2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_K2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_K2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_K2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_K2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_K2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_K2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_K2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_K2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_L2_PMASK0(_i)			(0x0020E0FC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_PMASK1(_i)			(0x0020E108 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_PMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_PMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_PMASK1_BITMASK_M		MAKEMASK(0xFFFF, 0)
+#define GL_PSTEXT_L2_TMASK0(_i)			(0x0020E498 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK0_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK0_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK0_BITMASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_L2_TMASK1(_i)			(0x0020E4A4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2_TMASK1_MAX_INDEX		2
+#define GL_PSTEXT_L2_TMASK1_BITMASK_S		0
+#define GL_PSTEXT_L2_TMASK1_BITMASK_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_L2PRTMOD(_i)			(0x0020E09C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_L2PRTMOD_MAX_INDEX		2
+#define GL_PSTEXT_L2PRTMOD_XLT1_S		0
+#define GL_PSTEXT_L2PRTMOD_XLT1_M		MAKEMASK(0x3, 0)
+#define GL_PSTEXT_L2PRTMOD_XLT2_S		8
+#define GL_PSTEXT_L2PRTMOD_XLT2_M		MAKEMASK(0x3, 8)
+#define GL_PSTEXT_N2N_L2ADDR(_i)		(0x0020E15C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_N2N_L2ADDR_LINE_IDX_M		MAKEMASK(0x3F, 0)
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_N2N_L2ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_N2N_L2DATA(_i)		(0x0020E168 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_N2N_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_N2N_L2DATA_DATA0_S		0
+#define GL_PSTEXT_N2N_L2DATA_DATA0_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_N2N_L2DATA_DATA1_S		8
+#define GL_PSTEXT_N2N_L2DATA_DATA1_M		MAKEMASK(0xFF, 8)
+#define GL_PSTEXT_N2N_L2DATA_DATA2_S		16
+#define GL_PSTEXT_N2N_L2DATA_DATA2_M		MAKEMASK(0xFF, 16)
+#define GL_PSTEXT_N2N_L2DATA_DATA3_S		24
+#define GL_PSTEXT_N2N_L2DATA_DATA3_M		MAKEMASK(0xFF, 24)
+#define GL_PSTEXT_P2P_L1ADDR(_i)		(0x0020E024 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_S		0
+#define GL_PSTEXT_P2P_L1ADDR_LINE_IDX_M		BIT(0)
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_S		31
+#define GL_PSTEXT_P2P_L1ADDR_AUTO_INC_M		BIT(31)
+#define GL_PSTEXT_P2P_L1DATA(_i)		(0x0020E030 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_P2P_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_P2P_L1DATA_DATA_S		0
+#define GL_PSTEXT_P2P_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_PID_L2GKTYPE(_i)		(0x0020E0F0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PID_L2GKTYPE_MAX_INDEX	2
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_S	0
+#define GL_PSTEXT_PID_L2GKTYPE_PID_GKTYPE_M	MAKEMASK(0x3, 0)
+#define GL_PSTEXT_PLVL_SEL(_i)			(0x0020E00C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PLVL_SEL_MAX_INDEX		2
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_S		0
+#define GL_PSTEXT_PLVL_SEL_PLVL_SEL_M		BIT(0)
+#define GL_PSTEXT_PRFLM_CTRL(_i)		(0x0020E474 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_CTRL_MAX_INDEX		2
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_S		0
+#define GL_PSTEXT_PRFLM_CTRL_PRFL_IDX_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_S		30
+#define GL_PSTEXT_PRFLM_CTRL_RD_REQ_M		BIT(30)
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_S		31
+#define GL_PSTEXT_PRFLM_CTRL_WR_REQ_M		BIT(31)
+#define GL_PSTEXT_PRFLM_DATA_0(_i)		(0x0020E174 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_0_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_0_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_0_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_1(_i)		(0x0020E274 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_1_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_1_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_1_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_PRFLM_DATA_2(_i)		(0x0020E374 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_PSTEXT_PRFLM_DATA_2_MAX_INDEX	63
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_S		0
+#define GL_PSTEXT_PRFLM_DATA_2_PROT_M		MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_S		16
+#define GL_PSTEXT_PRFLM_DATA_2_OFF_M		MAKEMASK(0x1FF, 16)
+#define GL_PSTEXT_TCAM_L2ADDR(_i)		(0x0020E114 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_TCAM_L2ADDR_LINE_IDX_M	MAKEMASK(0x3FF, 0)
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_TCAM_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_TCAM_L2DATALSB(_i)		(0x0020E120 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATALSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_S	0
+#define GL_PSTEXT_TCAM_L2DATALSB_DATALSB_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_TCAM_L2DATAMSB(_i)		(0x0020E12C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_TCAM_L2DATAMSB_MAX_INDEX	2
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_S	0
+#define GL_PSTEXT_TCAM_L2DATAMSB_DATAMSB_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR(_i)		(0x0020E03C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT0_L1ADDR_LINE_IDX_M	MAKEMASK(0xFF, 0)
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT0_L1ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT0_L1DATA(_i)		(0x0020E048 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT0_L1DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT0_L1DATA_DATA_S		0
+#define GL_PSTEXT_XLT0_L1DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR(_i)		(0x0020E0C0 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT1_L2ADDR_LINE_IDX_M	MAKEMASK(0x7FF, 0)
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT1_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT1_L2DATA(_i)		(0x0020E0CC + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT1_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT1_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT1_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR(_i)		(0x0020E0D8 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2ADDR_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_S	0
+#define GL_PSTEXT_XLT2_L2ADDR_LINE_IDX_M	MAKEMASK(0x1FF, 0)
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_S	31
+#define GL_PSTEXT_XLT2_L2ADDR_AUTO_INC_M	BIT(31)
+#define GL_PSTEXT_XLT2_L2DATA(_i)		(0x0020E0E4 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GL_PSTEXT_XLT2_L2DATA_MAX_INDEX		2
+#define GL_PSTEXT_XLT2_L2DATA_DATA_S		0
+#define GL_PSTEXT_XLT2_L2DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION(_i)		(0x0045C000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_PTYPE_TRANSLATION_MAX_INDEX	255
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_S	0
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_S	8
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_S	16
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_S	24
+#define GLFLXP_PTYPE_TRANSLATION_PTYPE_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RX_CMD_LX_PROT_IDX(_i)		(0x0045C400 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_LX_PROT_IDX_MAX_INDEX	255
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_S 0
+#define GLFLXP_RX_CMD_LX_PROT_IDX_INNER_CLOUD_OFFSET_INDEX_M MAKEMASK(0x7, 0)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_S 4
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_OFFSET_INDEX_M MAKEMASK(0x7, 4)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_S 8
+#define GLFLXP_RX_CMD_LX_PROT_IDX_PAYLOAD_OFFSET_INDEX_M MAKEMASK(0x7, 8)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_S 12
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L3_PROTOCOL_M MAKEMASK(0x3, 12)
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_S 14
+#define GLFLXP_RX_CMD_LX_PROT_IDX_L4_PROTOCOL_M MAKEMASK(0x3, 14)
+#define GLFLXP_RX_CMD_PROTIDS(_i, _j)		(0x0045A000 + ((_i) * 4 + (_j) * 1024)) /* _i=0...255, _j=0...5 */ /* Reset Source: CORER */
+#define GLFLXP_RX_CMD_PROTIDS_MAX_INDEX		255
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_S	0
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_S	8
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_1_M	MAKEMASK(0xFF, 8)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_S	16
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_2_M	MAKEMASK(0xFF, 16)
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_S	24
+#define GLFLXP_RX_CMD_PROTIDS_PROTID_4N_3_M	MAKEMASK(0xFF, 24)
+#define GLFLXP_RXDID_FLAGS(_i, _j)		(0x0045D000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...4 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS_MAX_INDEX		63
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S	0
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M	MAKEMASK(0x3F, 0)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S	8
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M	MAKEMASK(0x3F, 8)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S	16
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M	MAKEMASK(0x3F, 16)
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S	24
+#define GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M	MAKEMASK(0x3F, 24)
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE(_i)	(0x0045D600 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_MAX_INDEX	63
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_S 0
+#define GLFLXP_RXDID_FLAGS1_OVERRIDE_FLEXIFLAGS1_OVERRIDE_M MAKEMASK(0xF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0(_i)		(0x0045C800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_0_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_1(_i)		(0x0045C900 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_1_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_1_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_1_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_1_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_2(_i)		(0x0045CA00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_2_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_2_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_2_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_2_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_3(_i)		(0x0045CB00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_3_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_3_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_3_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_3_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_4(_i)		(0x0045CC00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_4_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_4_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_4_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_4_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
+#define GLFLXP_RXDID_FLX_WRD_5(_i)		(0x0045CD00 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLFLXP_RXDID_FLX_WRD_5_MAX_INDEX	63
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_S	0
+#define GLFLXP_RXDID_FLX_WRD_5_PROT_MDID_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_S 8
+#define GLFLXP_RXDID_FLX_WRD_5_EXTRACTION_OFFSET_M MAKEMASK(0x3FF, 8)
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_S	30
+#define GLFLXP_RXDID_FLX_WRD_5_RXDID_OPCODE_M	MAKEMASK(0x3, 30)
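+/*
+ * Usage sketch: the GLFLXP_RXDID_FLX_WRD_n registers above describe, per
+ * RXDID profile, which protocol metadata word is extracted into the flexible
+ * RX descriptor. A minimal, illustrative example of programming word 0,
+ * assuming the wr32() helper and the MAKEMASK()-built _S/_M pairs from
+ * ice_osdep.h; rxdid, prot_id, offset and opcode are placeholders:
+ *
+ *	u32 val = (prot_id << GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_S) &
+ *		  GLFLXP_RXDID_FLX_WRD_0_PROT_MDID_M;
+ *	val |= (offset << GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_S) &
+ *	       GLFLXP_RXDID_FLX_WRD_0_EXTRACTION_OFFSET_M;
+ *	val |= (opcode << GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_S) &
+ *	       GLFLXP_RXDID_FLX_WRD_0_RXDID_OPCODE_M;
+ *	wr32(hw, GLFLXP_RXDID_FLX_WRD_0(rxdid), val);
+ */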
+#define GLFLXP_TX_SCHED_CORRECT(_i, _j)		(0x00458000 + ((_i) * 4 + (_j) * 256)) /* _i=0...63, _j=0...31 */ /* Reset Source: CORER */
+#define GLFLXP_TX_SCHED_CORRECT_MAX_INDEX	63
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_S	0
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_M	MAKEMASK(0xFF, 0)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_S	8
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_M	MAKEMASK(0x1F, 8)
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_S 16
+#define GLFLXP_TX_SCHED_CORRECT_PROTD_ID_2N_1_M MAKEMASK(0xFF, 16)
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_S	24
+#define GLFLXP_TX_SCHED_CORRECT_RECIPE_2N_1_M	MAKEMASK(0x1F, 24)
+#define QRXFLXP_CNTXT(_QRX)			(0x00480000 + ((_QRX) * 4)) /* _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRXFLXP_CNTXT_MAX_INDEX			2047
+#define QRXFLXP_CNTXT_RXDID_IDX_S		0
+#define QRXFLXP_CNTXT_RXDID_IDX_M		MAKEMASK(0x3F, 0)
+#define QRXFLXP_CNTXT_RXDID_PRIO_S		8
+#define QRXFLXP_CNTXT_RXDID_PRIO_M		MAKEMASK(0x7, 8)
+#define QRXFLXP_CNTXT_TS_S			11
+#define QRXFLXP_CNTXT_TS_M			BIT(11)
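+/*
+ * Usage sketch: QRXFLXP_CNTXT selects which flexible descriptor profile
+ * (RXDID) an RX queue uses. A minimal example, assuming the wr32() helper
+ * from ice_osdep.h; queue_id and rxdid are placeholders:
+ *
+ *	u32 regval = (rxdid << QRXFLXP_CNTXT_RXDID_IDX_S) &
+ *		     QRXFLXP_CNTXT_RXDID_IDX_M;
+ *	regval |= (0x03 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+ *		  QRXFLXP_CNTXT_RXDID_PRIO_M;
+ *	wr32(hw, QRXFLXP_CNTXT(queue_id), regval);
+ */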
+#define GL_FWSTS				0x00083048 /* Reset Source: POR */
+#define GL_FWSTS_FWS0B_S			0
+#define GL_FWSTS_FWS0B_M			MAKEMASK(0xFF, 0)
+#define GL_FWSTS_FWROWD_S			8
+#define GL_FWSTS_FWROWD_M			BIT(8)
+#define GL_FWSTS_FWRI_S				9
+#define GL_FWSTS_FWRI_M				BIT(9)
+#define GL_FWSTS_FWS1B_S			16
+#define GL_FWSTS_FWS1B_M			MAKEMASK(0xFF, 16)
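+/*
+ * Usage sketch: extracting one firmware state byte from GL_FWSTS with its
+ * _M/_S pair, assuming the rd32() helper from ice_osdep.h:
+ *
+ *	u8 fw_state = (rd32(hw, GL_FWSTS) & GL_FWSTS_FWS1B_M) >>
+ *		      GL_FWSTS_FWS1B_S;
+ */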
+#define GL_TCVMLR_DRAIN_CNTR_CTL		0x000A21E0 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_S		0
+#define GL_TCVMLR_DRAIN_CNTR_CTL_OP_M		BIT(0)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_S		1
+#define GL_TCVMLR_DRAIN_CNTR_CTL_PORT_M		MAKEMASK(0x7, 1)
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_S	4
+#define GL_TCVMLR_DRAIN_CNTR_CTL_VALUE_M	MAKEMASK(0x3FFF, 4)
+#define GL_TCVMLR_DRAIN_DONE_DEC		0x000A21A8 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_S	0
+#define GL_TCVMLR_DRAIN_DONE_DEC_TARGET_M	BIT(0)
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_S	1
+#define GL_TCVMLR_DRAIN_DONE_DEC_INDEX_M	MAKEMASK(0x1F, 1)
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_S	6
+#define GL_TCVMLR_DRAIN_DONE_DEC_VALUE_M	MAKEMASK(0xFF, 6)
+#define GL_TCVMLR_DRAIN_DONE_TCLAN(_i)		(0x000A20A8 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TCLAN_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_DONE_TPB(_i)		(0x000A2128 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_DONE_TPB_MAX_INDEX	31
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_S	0
+#define GL_TCVMLR_DRAIN_DONE_TPB_COUNT_M	MAKEMASK(0xFF, 0)
+#define GL_TCVMLR_DRAIN_MARKER			0x000A2008 /* Reset Source: CORER */
+#define GL_TCVMLR_DRAIN_MARKER_PORT_S		0
+#define GL_TCVMLR_DRAIN_MARKER_PORT_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_DRAIN_MARKER_TC_S		3
+#define GL_TCVMLR_DRAIN_MARKER_TC_M		MAKEMASK(0x1F, 3)
+#define GL_TCVMLR_ERR_STAT			0x000A2024 /* Reset Source: CORER */
+#define GL_TCVMLR_ERR_STAT_ERROR_S		0
+#define GL_TCVMLR_ERR_STAT_ERROR_M		BIT(0)
+#define GL_TCVMLR_ERR_STAT_FW_REQ_S		1
+#define GL_TCVMLR_ERR_STAT_FW_REQ_M		BIT(1)
+#define GL_TCVMLR_ERR_STAT_STAT_S		2
+#define GL_TCVMLR_ERR_STAT_STAT_M		MAKEMASK(0x7, 2)
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_S		5
+#define GL_TCVMLR_ERR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 5)
+#define GL_TCVMLR_ERR_STAT_ENT_ID_S		8
+#define GL_TCVMLR_ERR_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 8)
+#define GL_TCVMLR_QCFG				0x000A2010 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_QID_S			0
+#define GL_TCVMLR_QCFG_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_OP_S			14
+#define GL_TCVMLR_QCFG_OP_M			BIT(14)
+#define GL_TCVMLR_QCFG_PORT_S			15
+#define GL_TCVMLR_QCFG_PORT_M			MAKEMASK(0x7, 15)
+#define GL_TCVMLR_QCFG_TC_S			18
+#define GL_TCVMLR_QCFG_TC_M			MAKEMASK(0x1F, 18)
+#define GL_TCVMLR_QCFG_RD			0x000A2014 /* Reset Source: CORER */
+#define GL_TCVMLR_QCFG_RD_QID_S			0
+#define GL_TCVMLR_QCFG_RD_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCFG_RD_PORT_S		14
+#define GL_TCVMLR_QCFG_RD_PORT_M		MAKEMASK(0x7, 14)
+#define GL_TCVMLR_QCFG_RD_TC_S			17
+#define GL_TCVMLR_QCFG_RD_TC_M			MAKEMASK(0x1F, 17)
+#define GL_TCVMLR_QCNTR				0x000A200C /* Reset Source: CORER */
+#define GL_TCVMLR_QCNTR_CNTR_S			0
+#define GL_TCVMLR_QCNTR_CNTR_M			MAKEMASK(0x7FFF, 0)
+#define GL_TCVMLR_QCTL				0x000A2004 /* Reset Source: CORER */
+#define GL_TCVMLR_QCTL_QID_S			0
+#define GL_TCVMLR_QCTL_QID_M			MAKEMASK(0x3FFF, 0)
+#define GL_TCVMLR_QCTL_OP_S			14
+#define GL_TCVMLR_QCTL_OP_M			BIT(14)
+#define GL_TCVMLR_REQ_STAT			0x000A2018 /* Reset Source: CORER */
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_REQ_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_REQ_STAT_ENT_ID_S		3
+#define GL_TCVMLR_REQ_STAT_ENT_ID_M		MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_REQ_STAT_OP_S			17
+#define GL_TCVMLR_REQ_STAT_OP_M			BIT(17)
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_S	18
+#define GL_TCVMLR_REQ_STAT_WRITE_STATUS_M	MAKEMASK(0x7, 18)
+#define GL_TCVMLR_STAT				0x000A201C /* Reset Source: CORER */
+#define GL_TCVMLR_STAT_ENT_TYPE_S		0
+#define GL_TCVMLR_STAT_ENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GL_TCVMLR_STAT_ENT_ID_S			3
+#define GL_TCVMLR_STAT_ENT_ID_M			MAKEMASK(0x3FFF, 3)
+#define GL_TCVMLR_STAT_STATUS_S			17
+#define GL_TCVMLR_STAT_STATUS_M			MAKEMASK(0x7, 17)
+#define GL_XLR_MARKER_TRIG_TCVMLR		0x000A2000 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_TCVMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_TCVMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_TCVMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_VMLR			0x00093804 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_VMLR_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_VMLR_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_VMLR_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GLGEN_ANA_ABORT_PTYPE			0x0020C21C /* Reset Source: CORER */
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_S		0
+#define GLGEN_ANA_ABORT_PTYPE_ABORT_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT		0x0020C208 /* Reset Source: CORER */
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_S	0
+#define GLGEN_ANA_ALU_ACCSS_OUT_OF_PKT_NPC_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_CFG_CTRL			0x0020C104 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_S		0
+#define GLGEN_ANA_CFG_CTRL_LINE_IDX_M		MAKEMASK(0x3FFFF, 0)
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_S		18
+#define GLGEN_ANA_CFG_CTRL_TABLE_ID_M		MAKEMASK(0xFF, 18)
+#define GLGEN_ANA_CFG_CTRL_RESRVED_S		26
+#define GLGEN_ANA_CFG_CTRL_RESRVED_M		MAKEMASK(0x7, 26)
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_S	29
+#define GLGEN_ANA_CFG_CTRL_OPERATION_ID_M	MAKEMASK(0x7, 29)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT		0x0020C158 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_S 1
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_PG_MEM_IDX_M MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_HTBL_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_LU_KEY(_i)		(0x0020C14C + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_LU_KEY_MAX_INDEX		2
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_S		0
+#define GLGEN_ANA_CFG_LU_KEY_LU_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_RDDATA(_i)		(0x0020C10C + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_RDDATA_MAX_INDEX		15
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_S		0
+#define GLGEN_ANA_CFG_RDDATA_RD_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT		0x0020C15C /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_S	0
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_HIT_M	BIT(0)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_S	1
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_RSV_M	MAKEMASK(0x7, 1)
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_S	4
+#define GLGEN_ANA_CFG_SPLBUF_LU_RESULT_ADDR_M	MAKEMASK(0x1FF, 4)
+#define GLGEN_ANA_CFG_WRDATA			0x0020C108 /* Reset Source: CORER */
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_S		0
+#define GLGEN_ANA_CFG_WRDATA_WR_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DEF_PTYPE			0x0020C100 /* Reset Source: CORER */
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_S		0
+#define GLGEN_ANA_DEF_PTYPE_DEF_PTYPE_M		MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_FIFO_0			0x0020C398 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_S		0
+#define GLGEN_ANA_DFD_FIFO_0_PC_NXT_M		BIT(0)
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_S		1
+#define GLGEN_ANA_DFD_FIFO_0_HO_NXT_M		BIT(1)
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_S		2
+#define GLGEN_ANA_DFD_FIFO_0_NID_NXT_M		BIT(2)
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_S	8
+#define GLGEN_ANA_DFD_FIFO_0_PG_KEY_SEL_M	BIT(8)
+#define GLGEN_ANA_DFD_FIFO_PTR			0x0020C43C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_DFD_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_DFD_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_GEN_CTRL			0x0020C38C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_S		0
+#define GLGEN_ANA_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_S	1
+#define GLGEN_ANA_DFD_GEN_CTRL_BLK_INPUT_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_0			0x0020C3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_S		8
+#define GLGEN_ANA_DFD_LOG_0_FLD_MODE_M		BIT(8)
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_S		16
+#define GLGEN_ANA_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_1			0x0020C3AC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_S	0
+#define GLGEN_ANA_DFD_LOG_1_NUM_EVENTS_M	MAKEMASK(0x3FF, 0)
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLGEN_ANA_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN		0x0020C3F8 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_S	0
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_ARB_M	BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_S	3
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_OUTPUT_M	BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST		0x0020C3FC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_S 0
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_ARB_M BIT(0)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_S	1
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_PH_FB_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_S	2
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_BLK_INPUT_M	BIT(2)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_S 3
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_OUTPUT_M BIT(3)
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 4
+#define GLGEN_ANA_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(4)
+#define GLGEN_ANA_DFD_LOG_DATA(_i)		(0x0020C3B0 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_S		0
+#define GLGEN_ANA_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_MASK(_i)		(0x0020C3D4 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_MASK_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_S		0
+#define GLGEN_ANA_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL		0x0020C400 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_S		0
+#define GLGEN_ANA_DFD_LOG_RST_ALL_RST_M		BIT(0)
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_S	1
+#define GLGEN_ANA_DFD_LOG_RST_ALL_GEN_RST_M	BIT(1)
+#define GLGEN_ANA_DFD_LOG_TRG_0			0x0020C404 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_S		0
+#define GLGEN_ANA_DFD_LOG_TRG_0_TAGID_M		MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLGEN_ANA_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_S	24
+#define GLGEN_ANA_DFD_LOG_TRG_0_MAX_NUM_RND_M	MAKEMASK(0x7F, 24)
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_S	31
+#define GLGEN_ANA_DFD_LOG_TRG_0_TRIGGED_M	BIT(31)
+#define GLGEN_ANA_DFD_LOG_TRG_DATA(_i)		(0x0020C408 + ((_i) * 4)) /* _i=0...8 */ /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_MAX_INDEX	8
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_S	0
+#define GLGEN_ANA_DFD_LOG_TRG_DATA_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_DFD_PACE_OUT			0x0020C4CC /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_S		0
+#define GLGEN_ANA_DFD_PACE_OUT_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_PACING_0			0x0020C390 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_S	0
+#define GLGEN_ANA_DFD_PACING_0_STOP_FEEDBK_M	BIT(0)
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_S	1
+#define GLGEN_ANA_DFD_PACING_0_STOP_ARB_M	BIT(1)
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_S	2
+#define GLGEN_ANA_DFD_PACING_0_NUM_CHUNKS_M	MAKEMASK(0x1F, 2)
+#define GLGEN_ANA_DFD_PACING_1			0x0020C394 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_PACING_1_PUSH_S		0
+#define GLGEN_ANA_DFD_PACING_1_PUSH_M		BIT(0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0		0x0020C39C /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_0_SLOT_ID_M	MAKEMASK(0xF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1		0x0020C3A0 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_S	0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_1_REGID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES		0x0020C3A4 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_S 0
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_REG_VAL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_S 16
+#define GLGEN_ANA_DFD_REG_FILE_ACC_RES_EXCEPTIONS_M MAKEMASK(0x7FFF, 16)
+#define GLGEN_ANA_DFD_TAGIDS			0x0020C438 /* Reset Source: CORER */
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_S 0
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_IN_DFD_FIFO_M MAKEMASK(0x3F, 0)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_S	8
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_NXT_ANA_M	MAKEMASK(0x3F, 8)
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_S	16
+#define GLGEN_ANA_DFD_TAGIDS_TAGID_OUT_M	MAKEMASK(0x3F, 16)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_S 24
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_IN_DFD_FIFO_M MAKEMASK(0xF, 24)
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_S	28
+#define GLGEN_ANA_DFD_TAGIDS_SLOTID_NXT_ANA_M	MAKEMASK(0xF, 28)
+#define GLGEN_ANA_ERR_AUX			0x0020C228 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_S		0
+#define GLGEN_ANA_ERR_AUX_IPLEN_GPREG_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_ERR_CTRL			0x0020C220 /* Reset Source: CORER */
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_S	0
+#define GLGEN_ANA_ERR_CTRL_ERR_MASK_EN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_FLAG_MAP(_i)			(0x0020C000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GLGEN_ANA_FLAG_MAP_MAX_INDEX		63
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_S		0
+#define GLGEN_ANA_FLAG_MAP_FLAG_EN_M		BIT(0)
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_S	1
+#define GLGEN_ANA_FLAG_MAP_EXT_FLAG_ID_M	MAKEMASK(0x3F, 1)
+#define GLGEN_ANA_GEN_DFD_RO			0x0020C4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_S		0
+#define GLGEN_ANA_GEN_DFD_RO_GEN_VAL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR			0x0020C448 /* Reset Source: CORER */
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_GIGO_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_GIGO_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR		0x0020C44C /* Reset Source: CORER */
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_S	0
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_HEAD_M	MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_S 16
+#define GLGEN_ANA_HDR_FIFO_FIFO_PTR_USED_SPACE_M MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_INV_NODE_PTYPE		0x0020C210 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_S 0
+#define GLGEN_ANA_INV_NODE_PTYPE_INV_NODE_PTYPE_M MAKEMASK(0x7FF, 0)
+#define GLGEN_ANA_INV_PROT_ID			0x0020C214 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_S	0
+#define GLGEN_ANA_INV_PROT_ID_INV_PROT_ID_M	MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_INV_PTYPE_MARKER		0x0020C218 /* Reset Source: CORER */
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_S 0
+#define GLGEN_ANA_INV_PTYPE_MARKER_INV_PTYPE_MARKER_M MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_LAST_PROT_ID(_i)		(0x0020C1E4 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GLGEN_ANA_LAST_PROT_ID_MAX_INDEX	5
+#define GLGEN_ANA_LAST_PROT_ID_EN_S		0
+#define GLGEN_ANA_LAST_PROT_ID_EN_M		BIT(0)
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_S	1
+#define GLGEN_ANA_LAST_PROT_ID_PROT_ID_M	MAKEMASK(0xFF, 1)
+#define GLGEN_ANA_MAX_HDRLEN			0x0020C1E0 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_HDRLEN_NPC_S		0
+#define GLGEN_ANA_MAX_HDRLEN_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_S	8
+#define GLGEN_ANA_MAX_HDRLEN_MAX_HDR_LEN_M	MAKEMASK(0x1FF, 8)
+#define GLGEN_ANA_MAX_PROT			0x0020C224 /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_S		0
+#define GLGEN_ANA_MAX_PROT_MAX_PRTS_M		MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MAX_ROUND			0x0020C20C /* Reset Source: CORER */
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_S	0
+#define GLGEN_ANA_MAX_ROUND_MAX_ROUND_ABS_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ANA_MIN_PKT			0x0020C42C /* Reset Source: CORER */
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_S		0
+#define GLGEN_ANA_MIN_PKT_MIN_LEN_M		MAKEMASK(0x3FFF, 0)
+#define GLGEN_ANA_NMPG_KEYMASK(_i)		(0x0020C1D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG_KEYMASK_MAX_INDEX	3
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG_KEYMASK_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NMPG0_HASHKEY(_i)		(0x0020C1B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_NMPG0_HASHKEY_MAX_INDEX	3
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_NMPG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_NO_HIT_PG_NM_PG		0x0020C204 /* Reset Source: CORER */
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_S		0
+#define GLGEN_ANA_NO_HIT_PG_NM_PG_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_OUT_OF_PKT			0x0020C200 /* Reset Source: CORER */
+#define GLGEN_ANA_OUT_OF_PKT_NPC_S		0
+#define GLGEN_ANA_OUT_OF_PKT_NPC_M		MAKEMASK(0xFF, 0)
+#define GLGEN_ANA_P2P(_i)			(0x0020C160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_P2P_MAX_INDEX			15
+#define GLGEN_ANA_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ANA_PG_KEYMASK(_i)		(0x0020C1C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG_KEYMASK_MAX_INDEX		3
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_S		0
+#define GLGEN_ANA_PG_KEYMASK_HASH_KEY_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PG0_HASHKEY(_i)		(0x0020C1A0 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLGEN_ANA_PG0_HASHKEY_MAX_INDEX		3
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_S	0
+#define GLGEN_ANA_PG0_HASHKEY_HASH_KEY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_PROFIL_CTRL			0x0020C1FC /* Reset Source: CORER */
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_S 0
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDID_M MAKEMASK(0x1F, 0)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_S 5
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MDSTART_M MAKEMASK(0xF, 5)
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_S 9
+#define GLGEN_ANA_PROFIL_CTRL_PROFILE_SELECT_MD_LEN_M MAKEMASK(0x1F, 9)
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_S 14
+#define GLGEN_ANA_PROFIL_CTRL_NUM_CTRL_DOMAIN_M MAKEMASK(0x3, 14)
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_S	16
+#define GLGEN_ANA_PROFIL_CTRL_DEF_PROF_ID_M	MAKEMASK(0xF, 16)
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_S 20
+#define GLGEN_ANA_PROFIL_CTRL_SEL_DEF_PROF_ID_M BIT(20)
+#define GLGEN_ANA_PSTAT_FIFO_PTR		0x0020C444 /* Reset Source: CORER */
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_PSTAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_PSTAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_STAT_FIFO_PTR			0x0020C440 /* Reset Source: CORER */
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_S		0
+#define GLGEN_ANA_STAT_FIFO_PTR_HEAD_M		MAKEMASK(0x1FF, 0)
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_S	16
+#define GLGEN_ANA_STAT_FIFO_PTR_USED_SPACE_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_LOG_0			0x0020D3A8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_S		0
+#define GLGEN_ANA_TX_DFD_LOG_0_SOURCE_M		MAKEMASK(0x7, 0)
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_S	3
+#define GLGEN_ANA_TX_DFD_LOG_0_LVL_OR_EDGE_M	BIT(3)
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_S	4
+#define GLGEN_ANA_TX_DFD_LOG_0_RC_DISP_TRIG_M	MAKEMASK(0x7, 4)
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_S	8
+#define GLGEN_ANA_TX_DFD_LOG_0_FLD_MODE_M	BIT(8)
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_S	16
+#define GLGEN_ANA_TX_DFD_LOG_0_DLY_CYCL_M	MAKEMASK(0x3FF, 16)
+#define GLGEN_ANA_TX_DFD_PACE_OUT		0x0020D4CC /* Reset Source: CORER */
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_S	0
+#define GLGEN_ANA_TX_DFD_PACE_OUT_PUSH_M	BIT(0)
+#define GLGEN_ANA_TX_GEN_DFD_RO			0x0020D4C8 /* Reset Source: CORER */
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_S	0
+#define GLGEN_ANA_TX_GEN_DFD_RO_GEN_VAL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ANA_TX_P2P(_i)			(0x0020D160 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLGEN_ANA_TX_P2P_MAX_INDEX		15
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_S		0
+#define GLGEN_ANA_TX_P2P_TARGET_PROF_M		MAKEMASK(0xF, 0)
+#define GLGEN_ASSERT_HLP			0x000B81E4 /* Reset Source: POR */
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_S		0
+#define GLGEN_ASSERT_HLP_CORE_ON_RST_M		BIT(0)
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_S		1
+#define GLGEN_ASSERT_HLP_FULL_ON_RST_M		BIT(1)
+#define GLGEN_CLKSTAT				0x000B8184 /* Reset Source: POR */
+#define GLGEN_CLKSTAT_U_CLK_SPEED_S		0
+#define GLGEN_CLKSTAT_U_CLK_SPEED_M		MAKEMASK(0x7, 0)
+#define GLGEN_CLKSTAT_L_CLK_SPEED_S		3
+#define GLGEN_CLKSTAT_L_CLK_SPEED_M		MAKEMASK(0x7, 3)
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_S		6
+#define GLGEN_CLKSTAT_PSM_CLK_SPEED_M		MAKEMASK(0x7, 6)
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_S		9
+#define GLGEN_CLKSTAT_RXCTL_CLK_SPEED_M		MAKEMASK(0x7, 9)
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_S		12
+#define GLGEN_CLKSTAT_UANA_CLK_SPEED_M		MAKEMASK(0x7, 12)
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_S		18
+#define GLGEN_CLKSTAT_PE_CLK_SPEED_M		MAKEMASK(0x7, 18)
+#define GLGEN_CLKSTAT_SRC			0x000B826C /* Reset Source: POR */
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_S		0
+#define GLGEN_CLKSTAT_SRC_U_CLK_SRC_M		MAKEMASK(0x3, 0)
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_S		2
+#define GLGEN_CLKSTAT_SRC_L_CLK_SRC_M		MAKEMASK(0x3, 2)
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S		4
+#define GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M		MAKEMASK(0x3, 4)
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_S	6
+#define GLGEN_CLKSTAT_SRC_RXCTL_CLK_SRC_M	MAKEMASK(0x3, 6)
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_S	8
+#define GLGEN_CLKSTAT_SRC_UANA_CLK_SRC_M	MAKEMASK(0xF, 8)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H		0x00093A00 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_H_CLIENT_NUM_M MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L		0x000939FC /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_S 0
+#define GLGEN_ECC_ERR_INT_TOG_MASK_L_CLIENT_NUM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_ECC_ERR_RST_MASK_H		0x000939F8 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_H_CLIENT_NUM_M	MAKEMASK(0x7F, 0)
+#define GLGEN_ECC_ERR_RST_MASK_L		0x000939F4 /* Reset Source: CORER */
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_S	0
+#define GLGEN_ECC_ERR_RST_MASK_L_CLIENT_NUM_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLGEN_GPIO_CTL(_i)			(0x000880C8 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: POR */
+#define GLGEN_GPIO_CTL_MAX_INDEX		6
+#define GLGEN_GPIO_CTL_IN_VALUE_S		0
+#define GLGEN_GPIO_CTL_IN_VALUE_M		BIT(0)
+#define GLGEN_GPIO_CTL_IN_TRANSIT_S		1
+#define GLGEN_GPIO_CTL_IN_TRANSIT_M		BIT(1)
+#define GLGEN_GPIO_CTL_OUT_VALUE_S		2
+#define GLGEN_GPIO_CTL_OUT_VALUE_M		BIT(2)
+#define GLGEN_GPIO_CTL_NO_P_UP_S		3
+#define GLGEN_GPIO_CTL_NO_P_UP_M		BIT(3)
+#define GLGEN_GPIO_CTL_PIN_DIR_S		4
+#define GLGEN_GPIO_CTL_PIN_DIR_M		BIT(4)
+#define GLGEN_GPIO_CTL_TRI_CTL_S		5
+#define GLGEN_GPIO_CTL_TRI_CTL_M		BIT(5)
+#define GLGEN_GPIO_CTL_PIN_FUNC_S		8
+#define GLGEN_GPIO_CTL_PIN_FUNC_M		MAKEMASK(0xF, 8)
+#define GLGEN_GPIO_CTL_INT_MODE_S		12
+#define GLGEN_GPIO_CTL_INT_MODE_M		MAKEMASK(0x3, 12)
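+/*
+ * Usage sketch: a read-modify-write of one GPIO control register, selecting
+ * a pin function and driving the pin as an output. Illustrative only,
+ * assuming rd32()/wr32() from ice_osdep.h; pin and func are placeholders:
+ *
+ *	u32 ctl = rd32(hw, GLGEN_GPIO_CTL(pin));
+ *	ctl &= ~GLGEN_GPIO_CTL_PIN_FUNC_M;
+ *	ctl |= (func << GLGEN_GPIO_CTL_PIN_FUNC_S) &
+ *	       GLGEN_GPIO_CTL_PIN_FUNC_M;
+ *	ctl |= GLGEN_GPIO_CTL_PIN_DIR_M;	/* pin direction: output */
+ *	wr32(hw, GLGEN_GPIO_CTL(pin), ctl);
+ */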
+#define GLGEN_MARKER_COUNT			0x000939E8 /* Reset Source: CORER */
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_S	0
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_M	MAKEMASK(0xFF, 0)
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_S	31
+#define GLGEN_MARKER_COUNT_MARKER_COUNT_EN_M	BIT(31)
+#define GLGEN_RSTAT				0x000B8188 /* Reset Source: POR */
+#define GLGEN_RSTAT_DEVSTATE_S			0
+#define GLGEN_RSTAT_DEVSTATE_M			MAKEMASK(0x3, 0)
+#define GLGEN_RSTAT_RESET_TYPE_S		2
+#define GLGEN_RSTAT_RESET_TYPE_M		MAKEMASK(0x3, 2)
+#define GLGEN_RSTAT_CORERCNT_S			4
+#define GLGEN_RSTAT_CORERCNT_M			MAKEMASK(0x3, 4)
+#define GLGEN_RSTAT_GLOBRCNT_S			6
+#define GLGEN_RSTAT_GLOBRCNT_M			MAKEMASK(0x3, 6)
+#define GLGEN_RSTAT_EMPRCNT_S			8
+#define GLGEN_RSTAT_EMPRCNT_M			MAKEMASK(0x3, 8)
+#define GLGEN_RSTAT_TIME_TO_RST_S		10
+#define GLGEN_RSTAT_TIME_TO_RST_M		MAKEMASK(0x3F, 10)
+#define GLGEN_RSTAT_RTRIG_FLR_S			16
+#define GLGEN_RSTAT_RTRIG_FLR_M			BIT(16)
+#define GLGEN_RSTAT_RTRIG_ECC_S			17
+#define GLGEN_RSTAT_RTRIG_ECC_M			BIT(17)
+#define GLGEN_RSTAT_RTRIG_FW_AUX_S		18
+#define GLGEN_RSTAT_RTRIG_FW_AUX_M		BIT(18)
+#define GLGEN_RTRIG				0x000B8190 /* Reset Source: CORER */
+#define GLGEN_RTRIG_CORER_S			0
+#define GLGEN_RTRIG_CORER_M			BIT(0)
+#define GLGEN_RTRIG_GLOBR_S			1
+#define GLGEN_RTRIG_GLOBR_M			BIT(1)
+#define GLGEN_RTRIG_EMPFWR_S			2
+#define GLGEN_RTRIG_EMPFWR_M			BIT(2)
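+/*
+ * Usage sketch: requesting a core reset via GLGEN_RTRIG, then waiting for
+ * the device to leave the reset state via GLGEN_RSTAT. A minimal sketch
+ * assuming rd32()/wr32()/ice_flush()/ice_msec_delay() from the base code;
+ * the retry bound is arbitrary:
+ *
+ *	wr32(hw, GLGEN_RTRIG, rd32(hw, GLGEN_RTRIG) | GLGEN_RTRIG_CORER_M);
+ *	ice_flush(hw);
+ *	for (i = 0; i < timeout; i++) {
+ *		if (!(rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M))
+ *			break;	\/\* reset completed *\/
+ *		ice_msec_delay(100, true);
+ *	}
+ */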
+#define GLGEN_STAT				0x000B612C /* Reset Source: POR */
+#define GLGEN_STAT_RSVD4FW_S			0
+#define GLGEN_STAT_RSVD4FW_M			MAKEMASK(0xFF, 0)
+#define GLGEN_VFLRSTAT(_i)			(0x00093A04 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLGEN_VFLRSTAT_MAX_INDEX		7
+#define GLGEN_VFLRSTAT_VFLRS_S			0
+#define GLGEN_VFLRSTAT_VFLRS_M			MAKEMASK(0xFFFFFFFF, 0)
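+/*
+ * Usage sketch: GLGEN_VFLRSTAT is a bitmap of VFs with a pending function
+ * level reset, 32 VFs per register. Checking one VF, assuming the rd32()
+ * helper from ice_osdep.h; vf_id is a placeholder:
+ *
+ *	u32 reg_idx = vf_id / 32;
+ *	u32 bit_idx = vf_id % 32;
+ *	bool vflr_pending = !!(rd32(hw, GLGEN_VFLRSTAT(reg_idx)) &
+ *			       BIT(bit_idx));
+ */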
+#define GLGEN_XLR_MSK2HLP_RDY			0x000939F0 /* Reset Source: CORER */
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_S 0
+#define GLGEN_XLR_MSK2HLP_RDY_GLGEN_XLR_MSK2HLP_RDY_M BIT(0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT		0x000939EC /* Reset Source: CORER */
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_S 0
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_BTWN_TRNS_COUNT_M MAKEMASK(0x1F, 0)
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_S 8
+#define GLGEN_XLR_TRNS_WAIT_COUNT_W_PEND_TRNS_COUNT_M MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_CAM_ACC			0x002D2E24 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_CLNUM_S		0
+#define GLQDC_DFD_CAM_ACC_CLNUM_M		MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0			0x002D2E28 /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_S		0
+#define GLQDC_DFD_CAM_ACC_RES_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_S		16
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_V_M		BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_S		31
+#define GLQDC_DFD_CAM_ACC_RES_0_CAM_E_M		BIT(31)
+#define GLQDC_DFD_CAM_ACC_RES_1			0x002D2E2C /* Reset Source: CORER */
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_S	0
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_HEAD_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_S	8
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_TAIL_M	MAKEMASK(0x3F, 8)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_S	16
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_EMPTY_M	BIT(16)
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_S	24
+#define GLQDC_DFD_CAM_ACC_RES_1_CL_MALC_M	MAKEMASK(0x3F, 24)
+#define GLQDC_DFD_FIFO_CFG_0			0x002D2E34 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_0_QID_S		0
+#define GLQDC_DFD_FIFO_CFG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_S		16
+#define GLQDC_DFD_FIFO_CFG_0_SMPL_PT_M		MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_S		31
+#define GLQDC_DFD_FIFO_CFG_0_ALL_QID_M		BIT(31)
+#define GLQDC_DFD_FIFO_CFG_1			0x002D2E38 /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_S		0
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_0_M		MAKEMASK(0x7, 0)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_S		4
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_1_M		MAKEMASK(0x7, 4)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_S		8
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_2_M		MAKEMASK(0x7, 8)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_S		12
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_3_M		MAKEMASK(0x7, 12)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_S		16
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_4_M		MAKEMASK(0x7, 16)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_S		20
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_5_M		MAKEMASK(0x7, 20)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_S		24
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_6_M		MAKEMASK(0x7, 24)
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_S		28
+#define GLQDC_DFD_FIFO_CFG_1_PRIO_7_M		MAKEMASK(0x7, 28)
+#define GLQDC_DFD_FIFO_SZ_CFG			0x002D30AC /* Reset Source: CORER */
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_S		0
+#define GLQDC_DFD_FIFO_SZ_CFG_COMP_M		MAKEMASK(0xFF, 0)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_S		8
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_M		MAKEMASK(0xFF, 8)
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_S	16
+#define GLQDC_DFD_FIFO_SZ_CFG_MISS_COMP_M	MAKEMASK(0xFF, 16)
+#define GLQDC_DFD_GEN_CHKN			0x002D30A0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CHKN_2			0x002D30A4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_S		0
+#define GLQDC_DFD_GEN_CHKN_2_GEN_BITS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_CTRL			0x002D2E20 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_CTRL_ENABLE_S		0
+#define GLQDC_DFD_GEN_CTRL_ENABLE_M		BIT(0)
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_S	1
+#define GLQDC_DFD_GEN_CTRL_BLK_INJECT_M1_M	BIT(1)
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_S	16
+#define GLQDC_DFD_GEN_CTRL_NUM_PAUSE_M1_M	MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0		0x002D2EE8 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_S 0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_ACK_M MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_S 7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_MISS_COMP_M MAKEMASK(0x7F, 7)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_S 14
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_DATA_M MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_S	16
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_COMP_FSM_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_S	23
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_0_PCIE_OUT_M	MAKEMASK(0x7, 23)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1		0x002D2EEC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_S	0
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_MISS_FSM_M	MAKEMASK(0x7F, 0)
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_S	7
+#define GLQDC_DFD_GEN_LOG_FIFO_ST_1_DFD_M	MAKEMASK(0xFF, 7)
+#define GLQDC_DFD_GEN_LOG_FSM			0x002D2EF0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_S		0
+#define GLQDC_DFD_GEN_LOG_FSM_FTSTATE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_S 2
+#define GLQDC_DFD_GEN_LOG_FSM_MISS_FIFO_FSM_ST_M MAKEMASK(0x7, 2)
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_S	5
+#define GLQDC_DFD_GEN_LOG_FSM_IN_MISS_FIFO_M	MAKEMASK(0x3, 5)
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_S		7
+#define GLQDC_DFD_GEN_LOG_FSM_CPSTATE_M		MAKEMASK(0x7, 7)
+#define GLQDC_DFD_GEN_LOGGNG_0			0x002D2EE0 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_S	0
+#define GLQDC_DFD_GEN_LOGGNG_0_RINGH_WR_RD_M	BIT(0)
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_S	1
+#define GLQDC_DFD_GEN_LOGGNG_0_QD_WR_RD_M	BIT(1)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_S 2
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_RD_REQ_VLD_M BIT(2)
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_S	3
+#define GLQDC_DFD_GEN_LOGGNG_0_NXT_SQ_VLD_M	BIT(3)
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_S 4
+#define GLQDC_DFD_GEN_LOGGNG_0_SQ_VLD_TO_DONE_M BIT(4)
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_S	5
+#define GLQDC_DFD_GEN_LOGGNG_0_PCIE_COMP_VLD_M	BIT(5)
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_S 6
+#define GLQDC_DFD_GEN_LOGGNG_0_FETCH_NXT_SQ_VLD_M BIT(6)
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_S	8
+#define GLQDC_DFD_GEN_LOGGNG_0_MALC_RPT_M	MAKEMASK(0xF, 8)
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_S	16
+#define GLQDC_DFD_GEN_LOGGNG_0_DFD_FIFO_ADD_M	MAKEMASK(0x7F, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1			0x002D2EE4 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_S	0
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_WM_M	MAKEMASK(0x3, 0)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_S	2
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_RFIL_M	MAKEMASK(0x3, 2)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_S	4
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_WM_M	MAKEMASK(0x3, 4)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_S	6
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_MRED_M	MAKEMASK(0x3, 6)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_S	8
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_WM_M	MAKEMASK(0x3, 8)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_S		10
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_M3_M		MAKEMASK(0x3, 10)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_S 12
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_WM_M MAKEMASK(0x3, 12)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_S	14
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_LSO_MT_M3_M	MAKEMASK(0x3, 14)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_S 16
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_WM_M MAKEMASK(0x3, 16)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_S 18
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_ACK_MISS_FIFO_M3_M MAKEMASK(0x3, 18)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_S	20
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_WM_M	MAKEMASK(0x3, 20)
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_S	22
+#define GLQDC_DFD_GEN_LOGGNG_1_WS_EVICT_M	MAKEMASK(0x3, 22)
+#define GLQDC_DFD_GEN_LOGGNG_2			0x002D2FFC /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_S	0
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_M	MAKEMASK(0x3F, 0)
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_S 6
+#define GLQDC_DFD_GEN_LOGGNG_2_WR_WHEN_FULL_LT_M MAKEMASK(0x3F, 6)
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_S		24
+#define GLQDC_DFD_GEN_LOGGNG_2_TEST_M		MAKEMASK(0xFF, 24)
+#define GLQDC_DFD_GEN_LOGGNG_3			0x002D3008 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_3_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_4			0x002D300C /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_4_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_5			0x002D3010 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_5_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_LOGGNG_6			0x002D3014 /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_S		0
+#define GLQDC_DFD_GEN_LOGGNG_6_GEN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_GEN_STAT_REGS(_i)		(0x002D3018 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_GEN_STAT_REGS_MAX_INDEX	15
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_S		0
+#define GLQDC_DFD_GEN_STAT_REGS_COUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_0				0x002D2E3C /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_0_SOURCE_S		0
+#define GLQDC_DFD_LOG_0_SOURCE_M		MAKEMASK(0x3, 0)
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_S		4
+#define GLQDC_DFD_LOG_0_LVL_OR_EDGE_M		BIT(4)
+#define GLQDC_DFD_LOG_0_DLY_CYCL_S		16
+#define GLQDC_DFD_LOG_0_DLY_CYCL_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1				0x002D2E40 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_S		0
+#define GLQDC_DFD_LOG_1_NUM_EVENTS_M		MAKEMASK(0x3FF, 0)
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_S		16
+#define GLQDC_DFD_LOG_1_NUM_TRIGS_M		MAKEMASK(0x3FF, 16)
+#define GLQDC_DFD_LOG_1_TRIG_B2B_S		31
+#define GLQDC_DFD_LOG_1_TRIG_B2B_M		BIT(31)
+#define GLQDC_DFD_LOG_ACTN_EN			0x002D2EA4 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_EN_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_EN_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_EN_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_ACTN_RST			0x002D2EA8 /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_S	0
+#define GLQDC_DFD_LOG_ACTN_RST_BLK_INJECT_M1_M	BIT(0)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_S 1
+#define GLQDC_DFD_LOG_ACTN_RST_STP_WR_DFD_FIFO_M BIT(1)
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_S 2
+#define GLQDC_DFD_LOG_ACTN_RST_STP_UPDT_MALC_RPT_CSR_M BIT(2)
+#define GLQDC_DFD_LOG_DATA(_i)			(0x002D2E44 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_DATA_MAX_INDEX		11
+#define GLQDC_DFD_LOG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_MASK(_i)			(0x002D2E74 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_MASK_MAX_INDEX		11
+#define GLQDC_DFD_LOG_MASK_MASK_S		0
+#define GLQDC_DFD_LOG_MASK_MASK_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_LOG_TRG_0			0x002D2EAC /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_0_QID_S		0
+#define GLQDC_DFD_LOG_TRG_0_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_S	16
+#define GLQDC_DFD_LOG_TRG_0_ACT_TRIGGED_M	BIT(16)
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_S		31
+#define GLQDC_DFD_LOG_TRG_0_TRIGGED_M		BIT(31)
+#define GLQDC_DFD_LOG_TRG_DATA(_i)		(0x002D2EB0 + ((_i) * 4)) /* _i=0...11 */ /* Reset Source: CORER */
+#define GLQDC_DFD_LOG_TRG_DATA_MAX_INDEX	11
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_S		0
+#define GLQDC_DFD_LOG_TRG_DATA_DATA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQDC_DFD_PACE				0x002D3000 /* Reset Source: CORER */
+#define GLQDC_DFD_PACE_PUSH_S			0
+#define GLQDC_DFD_PACE_PUSH_M			BIT(0)
+#define GLQDC_DFD_RST				0x002D2E30 /* Reset Source: CORER */
+#define GLQDC_DFD_RST_RST_S			0
+#define GLQDC_DFD_RST_RST_M			BIT(0)
+#define GLQDC_DFD_RST_CLR_MALC_RPT_S		1
+#define GLQDC_DFD_RST_CLR_MALC_RPT_M		BIT(1)
+#define GLQDC_DFD_RST_LOG_RST_S			2
+#define GLQDC_DFD_RST_LOG_RST_M			BIT(2)
+#define GLQDC_DFD_SAMPLE_RO_CSR			0x002D3004 /* Reset Source: CORER */
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_S		0
+#define GLQDC_DFD_SAMPLE_RO_CSR_SMPL_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_0			0x002D3058 /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_0_CLR_S		0
+#define GLQDC_DFD_STATS_CFG_0_CLR_M		BIT(0)
+#define GLQDC_DFD_STATS_CFG_1			0x002D305C /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_1_QID_S		0
+#define GLQDC_DFD_STATS_CFG_1_QID_M		MAKEMASK(0x3FFF, 0)
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_S		16
+#define GLQDC_DFD_STATS_CFG_1_GEN_CFG_M		MAKEMASK(0x1F, 16)
+#define GLQDC_DFD_STATS_CFG_EVNT(_i)		(0x002D3060 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQDC_DFD_STATS_CFG_EVNT_MAX_INDEX	15
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_S	0
+#define GLQDC_DFD_STATS_CFG_EVNT_EVNT_ID_M	MAKEMASK(0x1F, 0)
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_S	31
+#define GLQDC_DFD_STATS_CFG_EVNT_WRAP_EN_M	BIT(31)
+#define GLQDC_DFD_TEST_MNG			0x002D30A8 /* Reset Source: CORER */
+#define GLQDC_DFD_TEST_MNG_TST_S		2
+#define GLQDC_DFD_TEST_MNG_TST_M		BIT(2)
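+
+/* Field access convention: each field has a shift macro (_S) and a mask
+ * macro (_M) that is already shifted into position via MAKEMASK(value,
+ * shift) or BIT(n). A minimal read sketch, assuming the driver's rd32()
+ * MMIO helper from ice_osdep.h and a valid hw handle:
+ *
+ *	u32 reg = rd32(hw, GLQDC_DFD_LOG_1);
+ *	u16 num_events = (u16)((reg & GLQDC_DFD_LOG_1_NUM_EVENTS_M) >>
+ *			       GLQDC_DFD_LOG_1_NUM_EVENTS_S);
+ */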
+#define GLVFGEN_TIMER				0x000B8214 /* Reset Source: POR */
+#define GLVFGEN_TIMER_GTIME_S			0
+#define GLVFGEN_TIMER_GTIME_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFGEN_CTRL				0x00091000 /* Reset Source: CORER */
+#define PFGEN_CTRL_PFSWR_S			0
+#define PFGEN_CTRL_PFSWR_M			BIT(0)
+#define PFGEN_DRUN				0x00091180 /* Reset Source: CORER */
+#define PFGEN_DRUN_DRVUNLD_S			0
+#define PFGEN_DRUN_DRVUNLD_M			BIT(0)
+#define PFGEN_PFRSTAT				0x00091080 /* Reset Source: CORER */
+#define PFGEN_PFRSTAT_PFRD_S			0
+#define PFGEN_PFRSTAT_PFRD_M			BIT(0)
+#define PFGEN_PORTNUM				0x001D2400 /* Reset Source: CORER */
+#define PFGEN_PORTNUM_PORT_NUM_S		0
+#define PFGEN_PORTNUM_PORT_NUM_M		MAKEMASK(0x7, 0)
+#define PFGEN_STATE				0x00088000 /* Reset Source: CORER */
+#define PFGEN_STATE_PFPEEN_S			0
+#define PFGEN_STATE_PFPEEN_M			BIT(0)
+#define PFGEN_STATE_RSVD_S			1
+#define PFGEN_STATE_RSVD_M			BIT(1)
+#define PFGEN_STATE_PFLINKEN_S			2
+#define PFGEN_STATE_PFLINKEN_M			BIT(2)
+#define PFGEN_STATE_PFSCEN_S			3
+#define PFGEN_STATE_PFSCEN_M			BIT(3)
+#define PRT_TCVMLR_DRAIN_CNTR			0x000A21C0 /* Reset Source: CORER */
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_S		0
+#define PRT_TCVMLR_DRAIN_CNTR_CNTR_M		MAKEMASK(0x3FFF, 0)
+#define PRTGEN_CNF				0x000B8120 /* Reset Source: POR */
+#define PRTGEN_CNF_PORT_DIS_S			0
+#define PRTGEN_CNF_PORT_DIS_M			BIT(0)
+#define PRTGEN_CNF_ALLOW_PORT_DIS_S		1
+#define PRTGEN_CNF_ALLOW_PORT_DIS_M		BIT(1)
+#define PRTGEN_CNF_EMP_PORT_DIS_S		2
+#define PRTGEN_CNF_EMP_PORT_DIS_M		BIT(2)
+#define PRTGEN_CNF2				0x000B8160 /* Reset Source: POR */
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_S	0
+#define PRTGEN_CNF2_ACTIVATE_PORT_LINK_M	BIT(0)
+#define PRTGEN_CNF3				0x000B8280 /* Reset Source: POR */
+#define PRTGEN_CNF3_PORT_STAGERING_EN_S		0
+#define PRTGEN_CNF3_PORT_STAGERING_EN_M		BIT(0)
+#define PRTGEN_STATUS				0x000B8100 /* Reset Source: POR */
+#define PRTGEN_STATUS_PORT_VALID_S		0
+#define PRTGEN_STATUS_PORT_VALID_M		BIT(0)
+#define PRTGEN_STATUS_PORT_ACTIVE_S		1
+#define PRTGEN_STATUS_PORT_ACTIVE_M		BIT(1)
+#define VFGEN_RSTAT(_VF)			(0x00074000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: VFR */
+#define VFGEN_RSTAT_MAX_INDEX			255
+#define VFGEN_RSTAT_VFR_STATE_S			0
+#define VFGEN_RSTAT_VFR_STATE_M			MAKEMASK(0x3, 0)
+#define VPGEN_VFRSTAT(_VF)			(0x00090800 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRSTAT_MAX_INDEX			255
+#define VPGEN_VFRSTAT_VFRD_S			0
+#define VPGEN_VFRSTAT_VFRD_M			BIT(0)
+#define VPGEN_VFRTRIG(_VF)			(0x00090000 + ((_VF) * 4)) /* _VF=0...255 */ /* Reset Source: CORER */
+#define VPGEN_VFRTRIG_MAX_INDEX			255
+#define VPGEN_VFRTRIG_VFSWR_S			0
+#define VPGEN_VFRTRIG_VFSWR_M			BIT(0)
+#define VSIGEN_RSTAT(_VSI)			(0x00092800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RSTAT_MAX_INDEX			767
+#define VSIGEN_RSTAT_VMRD_S			0
+#define VSIGEN_RSTAT_VMRD_M			BIT(0)
+#define VSIGEN_RTRIG(_VSI)			(0x00091800 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */
+#define VSIGEN_RTRIG_MAX_INDEX			767
+#define VSIGEN_RTRIG_VMSWR_S			0
+#define VSIGEN_RTRIG_VMSWR_M			BIT(0)
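+
+/* Indexed registers such as VPGEN_VFRSTAT(_VF) expand to base + index * 4;
+ * single-bit fields use BIT(n) masks directly. A sketch of checking whether
+ * a VF reset has completed, assuming rd32() and a caller-supplied vf_id
+ * within the documented index range:
+ *
+ *	bool vf_reset_done = !!(rd32(hw, VPGEN_VFRSTAT(vf_id)) &
+ *				VPGEN_VFRSTAT_VFRD_M);
+ */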
+#define GLHMC_APBVTINUSEBASE(_i)		(0x00524A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_APBVTINUSEBASE_MAX_INDEX		7
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_S	0
+#define GLHMC_APBVTINUSEBASE_FPMAPBINUSEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_CEQPART(_i)			(0x005031C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_CEQPART_MAX_INDEX			7
+#define GLHMC_CEQPART_PMCEQBASE_S		0
+#define GLHMC_CEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_CEQPART_PMCEQSIZE_S		16
+#define GLHMC_CEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_DBCQMAX				0x005220F0 /* Reset Source: CORER */
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_S		0
+#define GLHMC_DBCQMAX_GLHMC_DBCQMAX_M		MAKEMASK(0xFFFFF, 0)
+#define GLHMC_DBCQPART(_i)			(0x00503180 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBCQPART_MAX_INDEX		7
+#define GLHMC_DBCQPART_PMDBCQBASE_S		0
+#define GLHMC_DBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_DBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_DBQPMAX				0x005220EC /* Reset Source: CORER */
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_S		0
+#define GLHMC_DBQPMAX_GLHMC_DBQPMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_DBQPPART(_i)			(0x005044C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_DBQPPART_MAX_INDEX		7
+#define GLHMC_DBQPPART_PMDBQPBASE_S		0
+#define GLHMC_DBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_DBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_DBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_FSIAVBASE(_i)			(0x00525600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVBASE_MAX_INDEX		7
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_S		0
+#define GLHMC_FSIAVBASE_FPMFSIAVBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIAVCNT(_i)			(0x00525700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIAVCNT_MAX_INDEX		7
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_FSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIAVMAX				0x00522068 /* Reset Source: CORER */
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_S		0
+#define GLHMC_FSIAVMAX_PMFSIAVMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_FSIAVOBJSZ			0x00522064 /* Reset Source: CORER */
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_S		0
+#define GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FSIMCBASE(_i)			(0x00526000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCBASE_MAX_INDEX		7
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_S		0
+#define GLHMC_FSIMCBASE_FPMFSIMCBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_FSIMCCNT(_i)			(0x00526100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_FSIMCCNT_MAX_INDEX		7
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_FSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_FSIMCMAX				0x00522060 /* Reset Source: CORER */
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_S		0
+#define GLHMC_FSIMCMAX_PMFSIMCMAX_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_FSIMCOBJSZ			0x0052205C /* Reset Source: CORER */
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_S		0
+#define GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_FWPDINV				0x0052207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_PMSDIDX_S			0
+#define GLHMC_FWPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_PMSDPARTSEL_S		15
+#define GLHMC_FWPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_FWPDINV_PMPDIDX_S			16
+#define GLHMC_FWPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_FWPDINV_FPMAT			0x0010207C /* Reset Source: CORER */
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_FWPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_FWPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_FWPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_FWSDDATAHIGH			0x00522078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_FWSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATAHIGH_FPMAT		0x00102078 /* Reset Source: CORER */
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_FWSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_FWSDDATALOW			0x00522074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_PMSDVALID_S		0
+#define GLHMC_FWSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_FWSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_FWSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_FWSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_FWSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_FWSDDATALOW_FPMAT			0x00102074 /* Reset Source: CORER */
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_FWSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_PEARPBASE(_i)			(0x00524800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPBASE_MAX_INDEX		7
+#define GLHMC_PEARPBASE_FPMPEARPBASE_S		0
+#define GLHMC_PEARPBASE_FPMPEARPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEARPCNT(_i)			(0x00524900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEARPCNT_MAX_INDEX		7
+#define GLHMC_PEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_PEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEARPMAX				0x00522038 /* Reset Source: CORER */
+#define GLHMC_PEARPMAX_PMPEARPMAX_S		0
+#define GLHMC_PEARPMAX_PMPEARPMAX_M		MAKEMASK(0x1FFFF, 0)
+#define GLHMC_PEARPOBJSZ			0x00522034 /* Reset Source: CORER */
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_S		0
+#define GLHMC_PEARPOBJSZ_PMPEARPOBJSZ_M		MAKEMASK(0x7, 0)
+#define GLHMC_PECQBASE(_i)			(0x00524200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQBASE_MAX_INDEX		7
+#define GLHMC_PECQBASE_FPMPECQBASE_S		0
+#define GLHMC_PECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PECQCNT(_i)			(0x00524300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PECQCNT_MAX_INDEX			7
+#define GLHMC_PECQCNT_FPMPECQCNT_S		0
+#define GLHMC_PECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PECQOBJSZ				0x00522020 /* Reset Source: CORER */
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_S		0
+#define GLHMC_PECQOBJSZ_PMPECQOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDRBASE(_i)			(0x00526200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRBASE_MAX_INDEX		7
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_PEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRCNT(_i)			(0x00526300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHDRCNT_MAX_INDEX		7
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_S		0
+#define GLHMC_PEHDRCNT_GLHMC_PEHDRCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEHDRMAX				0x00522008 /* Reset Source: CORER */
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_S		0
+#define GLHMC_PEHDRMAX_PMPEHDRMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEHDRMAX_RSVD_S			19
+#define GLHMC_PEHDRMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEHDROBJSZ			0x00522004 /* Reset Source: CORER */
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_S		0
+#define GLHMC_PEHDROBJSZ_PMPEHDROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHDROBJSZ_RSVD_S			4
+#define GLHMC_PEHDROBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEHTCNT(_i)			(0x00524700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_MAX_INDEX			7
+#define GLHMC_PEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_PEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTCNT_FPMAT(_i)			(0x00104700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTCNT_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_PEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEHTEBASE(_i)			(0x00524600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_S		0
+#define GLHMC_PEHTEBASE_FPMPEHTEBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEBASE_FPMAT(_i)		(0x00104600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEHTEBASE_FPMAT_MAX_INDEX		7
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_PEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEHTEOBJSZ			0x0052202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_S		0
+#define GLHMC_PEHTEOBJSZ_PMPEHTEOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEHTEOBJSZ_FPMAT			0x0010202C /* Reset Source: CORER */
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_S	0
+#define GLHMC_PEHTEOBJSZ_FPMAT_PMPEHTEOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEHTMAX				0x00522030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEHTMAX_FPMAT			0x00102030 /* Reset Source: CORER */
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_S		0
+#define GLHMC_PEHTMAX_FPMAT_PMPEHTMAX_M		MAKEMASK(0x1FFFFF, 0)
+#define GLHMC_PEMDBASE(_i)			(0x00526400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDBASE_MAX_INDEX		7
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_S		0
+#define GLHMC_PEMDBASE_GLHMC_PEMDBASE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDCNT(_i)			(0x00526500 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMDCNT_MAX_INDEX			7
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_PEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEMDMAX				0x00522010 /* Reset Source: CORER */
+#define GLHMC_PEMDMAX_PMPEMDMAX_S		0
+#define GLHMC_PEMDMAX_PMPEMDMAX_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMDMAX_RSVD_S			24
+#define GLHMC_PEMDMAX_RSVD_M			MAKEMASK(0xFF, 24)
+#define GLHMC_PEMDOBJSZ				0x0052200C /* Reset Source: CORER */
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_S		0
+#define GLHMC_PEMDOBJSZ_PMPEMDOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEMDOBJSZ_RSVD_S			4
+#define GLHMC_PEMDOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEMRBASE(_i)			(0x00524C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRBASE_MAX_INDEX		7
+#define GLHMC_PEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_PEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEMRCNT(_i)			(0x00524D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEMRCNT_MAX_INDEX			7
+#define GLHMC_PEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_PEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEMRMAX				0x00522040 /* Reset Source: CORER */
+#define GLHMC_PEMRMAX_PMPEMRMAX_S		0
+#define GLHMC_PEMRMAX_PMPEMRMAX_M		MAKEMASK(0x7FFFFF, 0)
+#define GLHMC_PEMROBJSZ				0x0052203C /* Reset Source: CORER */
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_S		0
+#define GLHMC_PEMROBJSZ_PMPEMROBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCBASE(_i)			(0x00526600 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_S	0
+#define GLHMC_PEOOISCBASE_GLHMC_PEOOISCBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCCNT(_i)			(0x00526700 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCCNT_MAX_INDEX		7
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_PEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLBASE(_i)		(0x00526C00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLBASE_MAX_INDEX		7
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_PEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PEOOISCFFLCNT_PMAT(_i)		(0x00526D00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLCNT_PMAT_MAX_INDEX	7
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_S 0
+#define GLHMC_PEOOISCFFLCNT_PMAT_FPMPEOOISCFLCNT_M MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEOOISCFFLMAX			0x005220A4 /* Reset Source: CORER */
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_S	0
+#define GLHMC_PEOOISCFFLMAX_PMPEOOISCFFLMAX_M	MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCFFLMAX_RSVD_S		19
+#define GLHMC_PEOOISCFFLMAX_RSVD_M		MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCMAX			0x00522018 /* Reset Source: CORER */
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_S		0
+#define GLHMC_PEOOISCMAX_PMPEOOISCMAX_M		MAKEMASK(0x7FFFF, 0)
+#define GLHMC_PEOOISCMAX_RSVD_S			19
+#define GLHMC_PEOOISCMAX_RSVD_M			MAKEMASK(0x1FFF, 19)
+#define GLHMC_PEOOISCOBJSZ			0x00522014 /* Reset Source: CORER */
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_S	0
+#define GLHMC_PEOOISCOBJSZ_PMPEOOISCOBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEOOISCOBJSZ_RSVD_S		4
+#define GLHMC_PEOOISCOBJSZ_RSVD_M		MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PEPBLBASE(_i)			(0x00525800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLBASE_MAX_INDEX		7
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_S		0
+#define GLHMC_PEPBLBASE_FPMPEPBLBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEPBLCNT(_i)			(0x00525900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEPBLCNT_MAX_INDEX		7
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_PEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEPBLMAX				0x0052206C /* Reset Source: CORER */
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_S		0
+#define GLHMC_PEPBLMAX_PMPEPBLMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1BASE(_i)			(0x00525200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1BASE_MAX_INDEX		7
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_PEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1CNT(_i)			(0x00525300 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1CNT_MAX_INDEX			7
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_PEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQ1FLBASE(_i)			(0x00525400 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQ1FLBASE_MAX_INDEX		7
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_PEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQ1FLMAX				0x00522058 /* Reset Source: CORER */
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_S		0
+#define GLHMC_PEQ1FLMAX_PMPEQ1FLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEQ1MAX				0x00522054 /* Reset Source: CORER */
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_S		0
+#define GLHMC_PEQ1MAX_PMPEQ1MAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEQ1OBJSZ				0x00522050 /* Reset Source: CORER */
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_S		0
+#define GLHMC_PEQ1OBJSZ_PMPEQ1OBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PEQPBASE(_i)			(0x00524000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPBASE_MAX_INDEX		7
+#define GLHMC_PEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_PEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEQPCNT(_i)			(0x00524100 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEQPCNT_MAX_INDEX			7
+#define GLHMC_PEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_PEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEQPOBJSZ				0x0052201C /* Reset Source: CORER */
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_S		0
+#define GLHMC_PEQPOBJSZ_PMPEQPOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFBASE(_i)			(0x00526800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFBASE_MAX_INDEX		7
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_PERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFCNT(_i)			(0x00526900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFCNT_MAX_INDEX		7
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_S		0
+#define GLHMC_PERRFCNT_GLHMC_PERRFCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLBASE(_i)			(0x00526A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLBASE_MAX_INDEX		7
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_S	0
+#define GLHMC_PERRFFLBASE_GLHMC_PERRFFLBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_PERRFFLCNT_PMAT(_i)		(0x00526B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PERRFFLCNT_PMAT_MAX_INDEX		7
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_S	0
+#define GLHMC_PERRFFLCNT_PMAT_FPMPERRFFLCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PERRFFLMAX			0x005220A0 /* Reset Source: CORER */
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_S		0
+#define GLHMC_PERRFFLMAX_PMPERRFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PERRFFLMAX_RSVD_S			26
+#define GLHMC_PERRFFLMAX_RSVD_M			MAKEMASK(0x3F, 26)
+#define GLHMC_PERRFMAX				0x0052209C /* Reset Source: CORER */
+#define GLHMC_PERRFMAX_PMPERRFMAX_S		0
+#define GLHMC_PERRFMAX_PMPERRFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PERRFMAX_RSVD_S			28
+#define GLHMC_PERRFMAX_RSVD_M			MAKEMASK(0xF, 28)
+#define GLHMC_PERRFOBJSZ			0x00522098 /* Reset Source: CORER */
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_S		0
+#define GLHMC_PERRFOBJSZ_PMPERRFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PERRFOBJSZ_RSVD_S			4
+#define GLHMC_PERRFOBJSZ_RSVD_M			MAKEMASK(0xFFFFFFF, 4)
+#define GLHMC_PETIMERBASE(_i)			(0x00525A00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERBASE_MAX_INDEX		7
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_PETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PETIMERCNT(_i)			(0x00525B00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PETIMERCNT_MAX_INDEX		7
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_PETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMERMAX			0x00522084 /* Reset Source: CORER */
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_S		0
+#define GLHMC_PETIMERMAX_PMPETIMERMAX_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PETIMEROBJSZ			0x00522080 /* Reset Source: CORER */
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_S	0
+#define GLHMC_PETIMEROBJSZ_PMPETIMEROBJSZ_M	MAKEMASK(0xF, 0)
+#define GLHMC_PEXFBASE(_i)			(0x00524E00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFBASE_MAX_INDEX		7
+#define GLHMC_PEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_PEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFCNT(_i)			(0x00524F00 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFCNT_MAX_INDEX			7
+#define GLHMC_PEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_PEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_PEXFFLBASE(_i)			(0x00525000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PEXFFLBASE_MAX_INDEX		7
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_PEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_PEXFFLMAX				0x0052204C /* Reset Source: CORER */
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_S		0
+#define GLHMC_PEXFFLMAX_PMPEXFFLMAX_M		MAKEMASK(0x3FFFFFF, 0)
+#define GLHMC_PEXFMAX				0x00522048 /* Reset Source: CORER */
+#define GLHMC_PEXFMAX_PMPEXFMAX_S		0
+#define GLHMC_PEXFMAX_PMPEXFMAX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLHMC_PEXFOBJSZ				0x00522044 /* Reset Source: CORER */
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_S		0
+#define GLHMC_PEXFOBJSZ_PMPEXFOBJSZ_M		MAKEMASK(0xF, 0)
+#define GLHMC_PFPESDPART(_i)			(0x00520880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_MAX_INDEX		7
+#define GLHMC_PFPESDPART_PMSDBASE_S		0
+#define GLHMC_PFPESDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_PMSDSIZE_S		16
+#define GLHMC_PFPESDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_PFPESDPART_FPMAT(_i)		(0x00100880 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_PFPESDPART_FPMAT_MAX_INDEX	7
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_S	0
+#define GLHMC_PFPESDPART_FPMAT_PMSDBASE_M	MAKEMASK(0xFFF, 0)
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_S	16
+#define GLHMC_PFPESDPART_FPMAT_PMSDSIZE_M	MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART(_i)			(0x00520800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_MAX_INDEX			7
+#define GLHMC_SDPART_PMSDBASE_S			0
+#define GLHMC_SDPART_PMSDBASE_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_PMSDSIZE_S			16
+#define GLHMC_SDPART_PMSDSIZE_M			MAKEMASK(0x1FFF, 16)
+#define GLHMC_SDPART_FPMAT(_i)			(0x00100800 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLHMC_SDPART_FPMAT_MAX_INDEX		7
+#define GLHMC_SDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_SDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_SDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFAPBVTINUSEBASE(_i)		(0x0052CA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFAPBVTINUSEBASE_MAX_INDEX	31
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_S 0
+#define GLHMC_VFAPBVTINUSEBASE_FPMAPBINUSEBASE_M MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFCEQPART(_i)			(0x00502F00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFCEQPART_MAX_INDEX		31
+#define GLHMC_VFCEQPART_PMCEQBASE_S		0
+#define GLHMC_VFCEQPART_PMCEQBASE_M		MAKEMASK(0x3FF, 0)
+#define GLHMC_VFCEQPART_PMCEQSIZE_S		16
+#define GLHMC_VFCEQPART_PMCEQSIZE_M		MAKEMASK(0x3FF, 16)
+#define GLHMC_VFDBCQPART(_i)			(0x00502E00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBCQPART_MAX_INDEX		31
+#define GLHMC_VFDBCQPART_PMDBCQBASE_S		0
+#define GLHMC_VFDBCQPART_PMDBCQBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_S		16
+#define GLHMC_VFDBCQPART_PMDBCQSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFDBQPPART(_i)			(0x00504520 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFDBQPPART_MAX_INDEX		31
+#define GLHMC_VFDBQPPART_PMDBQPBASE_S		0
+#define GLHMC_VFDBQPPART_PMDBQPBASE_M		MAKEMASK(0x3FFF, 0)
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_S		16
+#define GLHMC_VFDBQPPART_PMDBQPSIZE_M		MAKEMASK(0x7FFF, 16)
+#define GLHMC_VFFSIAVBASE(_i)			(0x0052D600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVBASE_MAX_INDEX		31
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_S	0
+#define GLHMC_VFFSIAVBASE_FPMFSIAVBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIAVCNT(_i)			(0x0052D700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIAVCNT_MAX_INDEX		31
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_S		0
+#define GLHMC_VFFSIAVCNT_FPMFSIAVCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFFSIMCBASE(_i)			(0x0052E000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCBASE_MAX_INDEX		31
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_S	0
+#define GLHMC_VFFSIMCBASE_FPMFSIMCBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFFSIMCCNT(_i)			(0x0052E100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFFSIMCCNT_MAX_INDEX		31
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_S		0
+#define GLHMC_VFFSIMCCNT_FPMFSIMCSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPDINV(_i)			(0x00528300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_MAX_INDEX			31
+#define GLHMC_VFPDINV_PMSDIDX_S			0
+#define GLHMC_VFPDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_PMSDPARTSEL_S		15
+#define GLHMC_VFPDINV_PMSDPARTSEL_M		BIT(15)
+#define GLHMC_VFPDINV_PMPDIDX_S			16
+#define GLHMC_VFPDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPDINV_FPMAT(_i)			(0x00108300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPDINV_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_S		0
+#define GLHMC_VFPDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_S	15
+#define GLHMC_VFPDINV_FPMAT_PMSDPARTSEL_M	BIT(15)
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_S		16
+#define GLHMC_VFPDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define GLHMC_VFPEARPBASE(_i)			(0x0052C800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPBASE_MAX_INDEX		31
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_S	0
+#define GLHMC_VFPEARPBASE_FPMPEARPBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEARPCNT(_i)			(0x0052C900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEARPCNT_MAX_INDEX		31
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_S		0
+#define GLHMC_VFPEARPCNT_FPMPEARPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPECQBASE(_i)			(0x0052C200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQBASE_MAX_INDEX		31
+#define GLHMC_VFPECQBASE_FPMPECQBASE_S		0
+#define GLHMC_VFPECQBASE_FPMPECQBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPECQCNT(_i)			(0x0052C300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPECQCNT_MAX_INDEX		31
+#define GLHMC_VFPECQCNT_FPMPECQCNT_S		0
+#define GLHMC_VFPECQCNT_FPMPECQCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHDRBASE(_i)			(0x0052E200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRBASE_MAX_INDEX		31
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_S	0
+#define GLHMC_VFPEHDRBASE_GLHMC_PEHDRBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHDRCNT(_i)			(0x0052E300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHDRCNT_MAX_INDEX		31
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_S	0
+#define GLHMC_VFPEHDRCNT_GLHMC_PEHDRCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEHTCNT(_i)			(0x0052C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_S		0
+#define GLHMC_VFPEHTCNT_FPMPEHTCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTCNT_FPMAT(_i)		(0x0010C700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTCNT_FPMAT_MAX_INDEX		31
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_S	0
+#define GLHMC_VFPEHTCNT_FPMAT_FPMPEHTCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE(_i)			(0x0052C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_MAX_INDEX		31
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEHTEBASE_FPMAT(_i)		(0x0010C600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEHTEBASE_FPMAT_MAX_INDEX	31
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_S	0
+#define GLHMC_VFPEHTEBASE_FPMAT_FPMPEHTEBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMDBASE(_i)			(0x0052E400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDBASE_MAX_INDEX		31
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_S	0
+#define GLHMC_VFPEMDBASE_GLHMC_PEMDBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMDCNT(_i)			(0x0052E500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMDCNT_MAX_INDEX		31
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_S		0
+#define GLHMC_VFPEMDCNT_GLHMC_PEMDCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEMRBASE(_i)			(0x0052CC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRBASE_MAX_INDEX		31
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_S		0
+#define GLHMC_VFPEMRBASE_FPMPEMRBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEMRCNT(_i)			(0x0052CD00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEMRCNT_MAX_INDEX		31
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_S		0
+#define GLHMC_VFPEMRCNT_FPMPEMRSZ_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEOOISCBASE(_i)			(0x0052E600 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCBASE_MAX_INDEX		31
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_S 0
+#define GLHMC_VFPEOOISCBASE_GLHMC_PEOOISCBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCCNT(_i)			(0x0052E700 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCCNT_MAX_INDEX		31
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_S	0
+#define GLHMC_VFPEOOISCCNT_GLHMC_PEOOISCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEOOISCFFLBASE(_i)		(0x0052EC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEOOISCFFLBASE_MAX_INDEX	31
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_S 0
+#define GLHMC_VFPEOOISCFFLBASE_GLHMC_PEOOISCFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPEPBLBASE(_i)			(0x0052D800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLBASE_MAX_INDEX		31
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_S	0
+#define GLHMC_VFPEPBLBASE_FPMPEPBLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEPBLCNT(_i)			(0x0052D900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEPBLCNT_MAX_INDEX		31
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_S		0
+#define GLHMC_VFPEPBLCNT_FPMPEPBLCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1BASE(_i)			(0x0052D200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1BASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_S		0
+#define GLHMC_VFPEQ1BASE_FPMPEQ1BASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQ1CNT(_i)			(0x0052D300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1CNT_MAX_INDEX		31
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_S		0
+#define GLHMC_VFPEQ1CNT_FPMPEQ1CNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEQ1FLBASE(_i)			(0x0052D400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQ1FLBASE_MAX_INDEX		31
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_S	0
+#define GLHMC_VFPEQ1FLBASE_FPMPEQ1FLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPBASE(_i)			(0x0052C000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPBASE_MAX_INDEX		31
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_S		0
+#define GLHMC_VFPEQPBASE_FPMPEQPBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEQPCNT(_i)			(0x0052C100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEQPCNT_MAX_INDEX		31
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_S		0
+#define GLHMC_VFPEQPCNT_FPMPEQPCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPERRFBASE(_i)			(0x0052E800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_S	0
+#define GLHMC_VFPERRFBASE_GLHMC_PERRFBASE_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFCNT(_i)			(0x0052E900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFCNT_MAX_INDEX		31
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_S	0
+#define GLHMC_VFPERRFCNT_GLHMC_PERRFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPERRFFLBASE(_i)			(0x0052EA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPERRFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_S 0
+#define GLHMC_VFPERRFFLBASE_GLHMC_PERRFFLBASE_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFPETIMERBASE(_i)			(0x0052DA00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERBASE_MAX_INDEX		31
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_S	0
+#define GLHMC_VFPETIMERBASE_FPMPETIMERBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPETIMERCNT(_i)			(0x0052DB00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPETIMERCNT_MAX_INDEX		31
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_S	0
+#define GLHMC_VFPETIMERCNT_FPMPETIMERCNT_M	MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFBASE(_i)			(0x0052CE00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_S		0
+#define GLHMC_VFPEXFBASE_FPMPEXFBASE_M		MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFPEXFCNT(_i)			(0x0052CF00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFCNT_MAX_INDEX		31
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_S		0
+#define GLHMC_VFPEXFCNT_FPMPEXFCNT_M		MAKEMASK(0x1FFFFFFF, 0)
+#define GLHMC_VFPEXFFLBASE(_i)			(0x0052D000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFPEXFFLBASE_MAX_INDEX		31
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_S	0
+#define GLHMC_VFPEXFFLBASE_FPMPEXFFLBASE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH(_i)			(0x00528200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_MAX_INDEX		31
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_S	0
+#define GLHMC_VFSDDATAHIGH_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATAHIGH_FPMAT(_i)		(0x00108200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATAHIGH_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_S 0
+#define GLHMC_VFSDDATAHIGH_FPMAT_PMSDDATAHIGH_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLHMC_VFSDDATALOW(_i)			(0x00528100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_MAX_INDEX		31
+#define GLHMC_VFSDDATALOW_PMSDVALID_S		0
+#define GLHMC_VFSDDATALOW_PMSDVALID_M		BIT(0)
+#define GLHMC_VFSDDATALOW_PMSDTYPE_S		1
+#define GLHMC_VFSDDATALOW_PMSDTYPE_M		BIT(1)
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_S		2
+#define GLHMC_VFSDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_S		12
+#define GLHMC_VFSDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDDATALOW_FPMAT(_i)		(0x00108100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDDATALOW_FPMAT_MAX_INDEX	31
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_S	0
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_S	1
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define GLHMC_VFSDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
+#define GLHMC_VFSDPART(_i)			(0x00528800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_MAX_INDEX		31
+#define GLHMC_VFSDPART_PMSDBASE_S		0
+#define GLHMC_VFSDPART_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLHMC_VFSDPART_FPMAT(_i)		(0x00108800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLHMC_VFSDPART_FPMAT_MAX_INDEX		31
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_S		0
+#define GLHMC_VFSDPART_FPMAT_PMSDBASE_M		MAKEMASK(0xFFF, 0)
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_S		16
+#define GLHMC_VFSDPART_FPMAT_PMSDSIZE_M		MAKEMASK(0x1FFF, 16)
+#define GLMDOC_CACHESIZE			0x0051C06C /* Reset Source: CORER */
+#define GLMDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLMDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLMDOC_CACHESIZE_SETS_S			8
+#define GLMDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLMDOC_CACHESIZE_WAYS_S			20
+#define GLMDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPBLOC0_CACHESIZE			0x00518074 /* Reset Source: CORER */
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC0_CACHESIZE_SETS_S		8
+#define GLPBLOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC0_CACHESIZE_WAYS_S		20
+#define GLPBLOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPBLOC1_CACHESIZE			0x0051A074 /* Reset Source: CORER */
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPBLOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPBLOC1_CACHESIZE_SETS_S		8
+#define GLPBLOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPBLOC1_CACHESIZE_WAYS_S		20
+#define GLPBLOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE			0x00530048 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_WORD_SIZE_S		0
+#define GLPDOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_SETS_S			8
+#define GLPDOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_WAYS_S			20
+#define GLPDOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLPDOC_CACHESIZE_FPMAT			0x00110088 /* Reset Source: CORER */
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_S	0
+#define GLPDOC_CACHESIZE_FPMAT_WORD_SIZE_M	MAKEMASK(0xFF, 0)
+#define GLPDOC_CACHESIZE_FPMAT_SETS_S		8
+#define GLPDOC_CACHESIZE_FPMAT_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_S		20
+#define GLPDOC_CACHESIZE_FPMAT_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC0_CACHESIZE			0x005140A8 /* Reset Source: CORER */
+#define GLPEOC0_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC0_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC0_CACHESIZE_SETS_S		8
+#define GLPEOC0_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC0_CACHESIZE_WAYS_S		20
+#define GLPEOC0_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define GLPEOC1_CACHESIZE			0x005160A8 /* Reset Source: CORER */
+#define GLPEOC1_CACHESIZE_WORD_SIZE_S		0
+#define GLPEOC1_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLPEOC1_CACHESIZE_SETS_S		8
+#define GLPEOC1_CACHESIZE_SETS_M		MAKEMASK(0xFFF, 8)
+#define GLPEOC1_CACHESIZE_WAYS_S		20
+#define GLPEOC1_CACHESIZE_WAYS_M		MAKEMASK(0xF, 20)
+#define PFHMC_ERRORDATA				0x00520500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORDATA_FPMAT			0x00100500 /* Reset Source: PFR */
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_S	0
+#define PFHMC_ERRORDATA_FPMAT_HMC_ERROR_DATA_M	MAKEMASK(0x3FFFFFFF, 0)
+#define PFHMC_ERRORINFO				0x00520400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_PMF_INDEX_S		0
+#define PFHMC_ERRORINFO_PMF_INDEX_M		MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_PMF_ISVF_S		7
+#define PFHMC_ERRORINFO_PMF_ISVF_M		BIT(7)
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_S	16
+#define PFHMC_ERRORINFO_HMC_OBJECT_TYPE_M	MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_ERRORINFO_FPMAT			0x00100400 /* Reset Source: PFR */
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_S	0
+#define PFHMC_ERRORINFO_FPMAT_PMF_INDEX_M	MAKEMASK(0x1F, 0)
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_S	7
+#define PFHMC_ERRORINFO_FPMAT_PMF_ISVF_M	BIT(7)
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_S	8
+#define PFHMC_ERRORINFO_FPMAT_HMC_ERROR_TYPE_M	MAKEMASK(0xF, 8)
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_S 16
+#define PFHMC_ERRORINFO_FPMAT_HMC_OBJECT_TYPE_M MAKEMASK(0x1F, 16)
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_S	31
+#define PFHMC_ERRORINFO_FPMAT_ERROR_DETECTED_M	BIT(31)
+#define PFHMC_PDINV				0x00520300 /* Reset Source: PFR */
+#define PFHMC_PDINV_PMSDIDX_S			0
+#define PFHMC_PDINV_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_PMPDIDX_S			16
+#define PFHMC_PDINV_PMPDIDX_M			MAKEMASK(0x1FF, 16)
+#define PFHMC_PDINV_FPMAT			0x00100300 /* Reset Source: PFR */
+#define PFHMC_PDINV_FPMAT_PMSDIDX_S		0
+#define PFHMC_PDINV_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_PDINV_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_PDINV_FPMAT_PMPDIDX_S		16
+#define PFHMC_PDINV_FPMAT_PMPDIDX_M		MAKEMASK(0x1FF, 16)
+#define PFHMC_SDCMD				0x00520000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_PMSDIDX_S			0
+#define PFHMC_SDCMD_PMSDIDX_M			MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_PMSDWR_S			31
+#define PFHMC_SDCMD_PMSDWR_M			BIT(31)
+#define PFHMC_SDCMD_FPMAT			0x00100000 /* Reset Source: PFR */
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_S		0
+#define PFHMC_SDCMD_FPMAT_PMSDIDX_M		MAKEMASK(0xFFF, 0)
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_S		15
+#define PFHMC_SDCMD_FPMAT_PMSDPARTSEL_M		BIT(15)
+#define PFHMC_SDCMD_FPMAT_PMSDWR_S		31
+#define PFHMC_SDCMD_FPMAT_PMSDWR_M		BIT(31)
+#define PFHMC_SDDATAHIGH			0x00520200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_S		0
+#define PFHMC_SDDATAHIGH_PMSDDATAHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATAHIGH_FPMAT			0x00100200 /* Reset Source: PFR */
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_S	0
+#define PFHMC_SDDATAHIGH_FPMAT_PMSDDATAHIGH_M	MAKEMASK(0xFFFFFFFF, 0)
+#define PFHMC_SDDATALOW				0x00520100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_PMSDVALID_S		0
+#define PFHMC_SDDATALOW_PMSDVALID_M		BIT(0)
+#define PFHMC_SDDATALOW_PMSDTYPE_S		1
+#define PFHMC_SDDATALOW_PMSDTYPE_M		BIT(1)
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_S		2
+#define PFHMC_SDDATALOW_PMSDBPCOUNT_M		MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_PMSDDATALOW_S		12
+#define PFHMC_SDDATALOW_PMSDDATALOW_M		MAKEMASK(0xFFFFF, 12)
+#define PFHMC_SDDATALOW_FPMAT			0x00100100 /* Reset Source: PFR */
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_S	0
+#define PFHMC_SDDATALOW_FPMAT_PMSDVALID_M	BIT(0)
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_S	1
+#define PFHMC_SDDATALOW_FPMAT_PMSDTYPE_M	BIT(1)
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_S	2
+#define PFHMC_SDDATALOW_FPMAT_PMSDBPCOUNT_M	MAKEMASK(0x3FF, 2)
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_S	12
+#define PFHMC_SDDATALOW_FPMAT_PMSDDATALOW_M	MAKEMASK(0xFFFFF, 12)
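+
+/* Writes compose a value by shifting each field into place and masking with
+ * its _M macro before issuing the MMIO write. A hedged sketch of programming
+ * PFHMC_PDINV, assuming the wr32() helper from ice_osdep.h and caller-chosen
+ * sd_idx/pd_idx values:
+ *
+ *	u32 val = ((sd_idx << PFHMC_PDINV_PMSDIDX_S) & PFHMC_PDINV_PMSDIDX_M) |
+ *		  ((pd_idx << PFHMC_PDINV_PMPDIDX_S) & PFHMC_PDINV_PMPDIDX_M);
+ *	wr32(hw, PFHMC_PDINV, val);
+ */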
+#define GL_DSI_RDPC				0x00294204 /* Reset Source: CORER */
+#define GL_DSI_RDPC_RDPC_S			0
+#define GL_DSI_RDPC_RDPC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_DSI_REPC				0x00294208 /* Reset Source: CORER */
+#define GL_DSI_REPC_NO_DESC_CNT_S		0
+#define GL_DSI_REPC_NO_DESC_CNT_M		MAKEMASK(0xFFFF, 0)
+#define GL_DSI_REPC_ERROR_CNT_S			16
+#define GL_DSI_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GL_MDCK_TDAT_TCLAN			0x000FC0DC /* Reset Source: CORER */
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_S 0
+#define GL_MDCK_TDAT_TCLAN_WRONG_ORDER_FORMAT_DESC_M BIT(0)
+#define GL_MDCK_TDAT_TCLAN_UR_S			1
+#define GL_MDCK_TDAT_TCLAN_UR_M			BIT(1)
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_S 2
+#define GL_MDCK_TDAT_TCLAN_TAIL_DESC_NOT_DDESC_EOP_NOP_M BIT(2)
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_S	3
+#define GL_MDCK_TDAT_TCLAN_FALSE_SCHEDULING_M	BIT(3)
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_S 4
+#define GL_MDCK_TDAT_TCLAN_TAIL_VALUE_BIGGER_THAN_RING_LEN_M BIT(4)
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_S 5
+#define GL_MDCK_TDAT_TCLAN_MORE_THAN_8_DCMDS_IN_PKT_M BIT(5)
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_S 6
+#define GL_MDCK_TDAT_TCLAN_NO_HEAD_UPDATE_IN_QUANTA_M BIT(6)
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_S	7
+#define GL_MDCK_TDAT_TCLAN_PKT_LEN_NOT_LEGAL_M	BIT(7)
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_S 8
+#define GL_MDCK_TDAT_TCLAN_TSO_TLEN_NOT_COHERENT_WITH_SUM_BUFS_M BIT(8)
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_S 9
+#define GL_MDCK_TDAT_TCLAN_TSO_TAIL_REACHED_BEFORE_TLEN_END_M BIT(9)
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_S 10
+#define GL_MDCK_TDAT_TCLAN_TSO_MORE_THAN_3_HDRS_M BIT(10)
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_S 11
+#define GL_MDCK_TDAT_TCLAN_TSO_SUM_BUFFS_LT_SUM_HDRS_M BIT(11)
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_S 12
+#define GL_MDCK_TDAT_TCLAN_TSO_ZERO_MSS_TLEN_HDRS_M BIT(12)
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_S 13
+#define GL_MDCK_TDAT_TCLAN_TSO_CTX_DESC_IPSEC_M BIT(13)
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_S 14
+#define GL_MDCK_TDAT_TCLAN_SSO_COMS_NOT_WHOLE_PKT_NUM_IN_QUANTA_M BIT(14)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_S 15
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_BYTES_EXCEED_PKTLEN_X_64_M BIT(15)
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_S 16
+#define GL_MDCK_TDAT_TCLAN_COMS_QUANTA_CMDS_EXCEED_M BIT(16)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_S 17
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_LAST_LSO_QUANTA_M BIT(17)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_S 18
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_TSO_DESCS_TLEN_M BIT(18)
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_S 19
+#define GL_MDCK_TDAT_TCLAN_TSO_COMS_QUANTA_FINISHED_TOO_EARLY_M BIT(19)
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_S 20
+#define GL_MDCK_TDAT_TCLAN_COMS_NUM_PKTS_IN_QUANTA_M BIT(20)
+#define GL_PPRS_SPARE_0				0x000841A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_0_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_1				0x000851A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_1_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_2				0x000861A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_2_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_PPRS_SPARE_3				0x000871A8 /* Reset Source: CORER */
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_S		0
+#define GL_PPRS_SPARE_3_GL_PPRS_SPARE_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLCORE_CLKCTL_H				0x000B81E8 /* Reset Source: POR */
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_S	0
+#define GLCORE_CLKCTL_H_UPPER_CLK_SRC_H_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_S	2
+#define GLCORE_CLKCTL_H_LOWER_CLK_SRC_H_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_S		4
+#define GLCORE_CLKCTL_H_PSM_CLK_SRC_H_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_S	6
+#define GLCORE_CLKCTL_H_RXCTL_CLK_SRC_H_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_S	8
+#define GLCORE_CLKCTL_H_UANA_CLK_SRC_H_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_L				0x000B8254 /* Reset Source: POR */
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_S	0
+#define GLCORE_CLKCTL_L_UPPER_CLK_SRC_L_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_S	2
+#define GLCORE_CLKCTL_L_LOWER_CLK_SRC_L_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_S		4
+#define GLCORE_CLKCTL_L_PSM_CLK_SRC_L_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_S	6
+#define GLCORE_CLKCTL_L_RXCTL_CLK_SRC_L_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_S	8
+#define GLCORE_CLKCTL_L_UANA_CLK_SRC_L_M	MAKEMASK(0x7, 8)
+#define GLCORE_CLKCTL_M				0x000B8258 /* Reset Source: POR */
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_S	0
+#define GLCORE_CLKCTL_M_UPPER_CLK_SRC_M_M	MAKEMASK(0x3, 0)
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_S	2
+#define GLCORE_CLKCTL_M_LOWER_CLK_SRC_M_M	MAKEMASK(0x3, 2)
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_S		4
+#define GLCORE_CLKCTL_M_PSM_CLK_SRC_M_M		MAKEMASK(0x3, 4)
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_S	6
+#define GLCORE_CLKCTL_M_RXCTL_CLK_SRC_M_M	MAKEMASK(0x3, 6)
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_S	8
+#define GLCORE_CLKCTL_M_UANA_CLK_SRC_M_M	MAKEMASK(0x7, 8)
+#define GLFOC_CACHESIZE				0x000AA074 /* Reset Source: CORER */
+#define GLFOC_CACHESIZE_WORD_SIZE_S		0
+#define GLFOC_CACHESIZE_WORD_SIZE_M		MAKEMASK(0xFF, 0)
+#define GLFOC_CACHESIZE_SETS_S			8
+#define GLFOC_CACHESIZE_SETS_M			MAKEMASK(0xFFF, 8)
+#define GLFOC_CACHESIZE_WAYS_S			20
+#define GLFOC_CACHESIZE_WAYS_M			MAKEMASK(0xF, 20)
+#define GLGEN_CAR_DEBUG				0x000B81C0 /* Reset Source: POR */
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_S 0
+#define GLGEN_CAR_DEBUG_CAR_UPPER_CORE_CLK_EN_M BIT(0)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_S	1
+#define GLGEN_CAR_DEBUG_CAR_PCIE_HIU_CLK_EN_M	BIT(1)
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_S		2
+#define GLGEN_CAR_DEBUG_CAR_PE_CLK_EN_M		BIT(2)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_S 3
+#define GLGEN_CAR_DEBUG_CAR_PCIE_PRIM_CLK_ACTIVE_M BIT(3)
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_S		4
+#define GLGEN_CAR_DEBUG_CDC_PE_ACTIVE_M		BIT(4)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_S 5
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_PRST_RESET_N_M BIT(5)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_S 6
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_SCLR_RESET_N_M BIT(6)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_S 7
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IB_RESET_N_M BIT(7)
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_S 8
+#define GLGEN_CAR_DEBUG_CAR_PCIE_RAW_IMIB_RESET_N_M BIT(8)
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_S	9
+#define GLGEN_CAR_DEBUG_CAR_RAW_EMP_RESET_N_M	BIT(9)
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_S 10
+#define GLGEN_CAR_DEBUG_CAR_RAW_GLOBAL_RESET_N_M BIT(10)
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_S 11
+#define GLGEN_CAR_DEBUG_CAR_RAW_LAN_POWER_GOOD_M BIT(11)
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_S 12
+#define GLGEN_CAR_DEBUG_CDC_IOSF_PRIMERY_RST_B_M BIT(12)
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_S	13
+#define GLGEN_CAR_DEBUG_GBE_GLOBALRST_B_M	BIT(13)
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_S	14
+#define GLGEN_CAR_DEBUG_FLEEP_AL_GLOBR_DONE_M	BIT(14)
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_S		15
+#define GLGEN_CAR_DEBUG_CAR_RST_STATE_M		MAKEMASK(0xF, 15)
+#define GLGEN_CAR_SPARE				0x000B81C4 /* Reset Source: POR */
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_S		0
+#define GLGEN_CAR_SPARE_SPARE_CLEAR_M		MAKEMASK(0xFFFF, 0)
+#define GLGEN_CAR_SPARE_SPARE_SET_S		16
+#define GLGEN_CAR_SPARE_SPARE_SET_M		MAKEMASK(0xFFFF, 16)
+#define GLMAC_CLKSTAT				0x000B8210 /* Reset Source: POR */
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_S		0
+#define GLMAC_CLKSTAT_P0_CLK_SPEED_M		MAKEMASK(0xF, 0)
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_S		4
+#define GLMAC_CLKSTAT_P1_CLK_SPEED_M		MAKEMASK(0xF, 4)
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_S		8
+#define GLMAC_CLKSTAT_P2_CLK_SPEED_M		MAKEMASK(0xF, 8)
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_S		12
+#define GLMAC_CLKSTAT_P3_CLK_SPEED_M		MAKEMASK(0xF, 12)
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_S		16
+#define GLMAC_CLKSTAT_P4_CLK_SPEED_M		MAKEMASK(0xF, 16)
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_S		20
+#define GLMAC_CLKSTAT_P5_CLK_SPEED_M		MAKEMASK(0xF, 20)
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_S		24
+#define GLMAC_CLKSTAT_P6_CLK_SPEED_M		MAKEMASK(0xF, 24)
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_S		28
+#define GLMAC_CLKSTAT_P7_CLK_SPEED_M		MAKEMASK(0xF, 28)
+#define GLRCB_DCB_LAN_PMS			0x001223F8 /* Reset Source: CORER */
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_S		0
+#define GLRCB_DCB_LAN_PMS_PSM_LAN_M		MAKEMASK(0x3FFF, 0)
+#define GLRCB_DCB_RDMA_PMS			0x001223FC /* Reset Source: CORER */
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_S		0
+#define GLRCB_DCB_RDMA_PMS_PSM_RDMA_M		MAKEMASK(0x3FFF, 0)
+#define GLRLAN_MDET				0x00294200 /* Reset Source: CORER */
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_S		0
+#define GLRLAN_MDET_PCKT_EXTRCT_ERR_M		BIT(0)
+#define GLTPB_100G_MAC_FC_THRESH		0x00099510 /* Reset Source: CORER */
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_MAC_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_MAC_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_100G_RPB_FC_THRESH		0x0009963C /* Reset Source: CORER */
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_S 0
+#define GLTPB_100G_RPB_FC_THRESH_PORT0_FC_THRESH_M MAKEMASK(0xFFFF, 0)
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_S 16
+#define GLTPB_100G_RPB_FC_THRESH_PORT1_FC_THRESH_M MAKEMASK(0xFFFF, 16)
+#define GLTPB_PACING_10G			0x000994E4 /* Reset Source: CORER */
+#define GLTPB_PACING_10G_N_S			0
+#define GLTPB_PACING_10G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_10G_K_S			8
+#define GLTPB_PACING_10G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_10G_S_S			16
+#define GLTPB_PACING_10G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PACING_25G			0x000994E0 /* Reset Source: CORER */
+#define GLTPB_PACING_25G_N_S			0
+#define GLTPB_PACING_25G_N_M			MAKEMASK(0xFF, 0)
+#define GLTPB_PACING_25G_K_S			8
+#define GLTPB_PACING_25G_K_M			MAKEMASK(0xFF, 8)
+#define GLTPB_PACING_25G_S_S			16
+#define GLTPB_PACING_25G_S_M			MAKEMASK(0x1FF, 16)
+#define GLTPB_PORT_PACING_SPEED			0x000994E8 /* Reset Source: CORER */
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_S	0
+#define GLTPB_PORT_PACING_SPEED_PORT0_SPEED_M	BIT(0)
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_S	1
+#define GLTPB_PORT_PACING_SPEED_PORT1_SPEED_M	BIT(1)
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_S	2
+#define GLTPB_PORT_PACING_SPEED_PORT2_SPEED_M	BIT(2)
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_S	3
+#define GLTPB_PORT_PACING_SPEED_PORT3_SPEED_M	BIT(3)
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_S	4
+#define GLTPB_PORT_PACING_SPEED_PORT4_SPEED_M	BIT(4)
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_S	5
+#define GLTPB_PORT_PACING_SPEED_PORT5_SPEED_M	BIT(5)
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_S	6
+#define GLTPB_PORT_PACING_SPEED_PORT6_SPEED_M	BIT(6)
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_S	7
+#define GLTPB_PORT_PACING_SPEED_PORT7_SPEED_M	BIT(7)
+#define GLTSYN_HH_DBG				0x000889F0 /* Reset Source: CORER */
+#define GLTSYN_HH_DBG_HH_SYNC_S			0
+#define GLTSYN_HH_DBG_HH_SYNC_M			BIT(0)
+#define GLTSYN_HH_DBG_HH_LATCH_EN_S		1
+#define GLTSYN_HH_DBG_HH_LATCH_EN_M		BIT(1)
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD		0x00099494 /* Reset Source: CORER */
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_S 0
+#define TPB_CFG_SCHEDULED_BC_THRESHOLD_THRESHOLD_M MAKEMASK(0x7FFF, 0)
+#define GL_UFUSE_SOC				0x000A400C /* Reset Source: POR */
+#define GL_UFUSE_SOC_PORT_MODE_S		0
+#define GL_UFUSE_SOC_PORT_MODE_M		MAKEMASK(0x3, 0)
+#define GL_UFUSE_SOC_BANDWIDTH_S		2
+#define GL_UFUSE_SOC_BANDWIDTH_M		MAKEMASK(0x3, 2)
+#define GL_UFUSE_SOC_PE_DISABLE_S		4
+#define GL_UFUSE_SOC_PE_DISABLE_M		BIT(4)
+#define GL_UFUSE_SOC_SWITCH_MODE_S		5
+#define GL_UFUSE_SOC_SWITCH_MODE_M		BIT(5)
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_S	6
+#define GL_UFUSE_SOC_CSR_PROTECTION_ENABLE_M	BIT(6)
+#define GL_UFUSE_SOC_SERIAL_50G_S		7
+#define GL_UFUSE_SOC_SERIAL_50G_M		BIT(7)
+#define GL_UFUSE_SOC_NIC_ID_S			8
+#define GL_UFUSE_SOC_NIC_ID_M			BIT(8)
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_S		9
+#define GL_UFUSE_SOC_BLOCK_BME_TO_FW_M		BIT(9)
+#define GL_UFUSE_SOC_SOC_TYPE_S			10
+#define GL_UFUSE_SOC_SOC_TYPE_M			BIT(10)
+#define GL_UFUSE_SOC_BTS_MODE_S			11
+#define GL_UFUSE_SOC_BTS_MODE_M			BIT(11)
+#define GL_UFUSE_SOC_SPARE_FUSES_S		12
+#define GL_UFUSE_SOC_SPARE_FUSES_M		MAKEMASK(0xF, 12)
+#define EMPINT_GPIO_ENA				0x000880C0 /* Reset Source: POR */
+#define EMPINT_GPIO_ENA_GPIO0_ENA_S		0
+#define EMPINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define EMPINT_GPIO_ENA_GPIO1_ENA_S		1
+#define EMPINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define EMPINT_GPIO_ENA_GPIO2_ENA_S		2
+#define EMPINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define EMPINT_GPIO_ENA_GPIO3_ENA_S		3
+#define EMPINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define EMPINT_GPIO_ENA_GPIO4_ENA_S		4
+#define EMPINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define EMPINT_GPIO_ENA_GPIO5_ENA_S		5
+#define EMPINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define EMPINT_GPIO_ENA_GPIO6_ENA_S		6
+#define EMPINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define GL_CLKGEN_DEBUG				0x000B8268 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_PROBE_S			0
+#define GL_CLKGEN_DEBUG_PROBE_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GL_CLKGEN_DEBUG_SEL			0x000B8264 /* Reset Source: POR */
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_S 0
+#define GL_CLKGEN_DEBUG_SEL_GL_CLKGEN_DEBUG_SEL_M MAKEMASK(0xFFFF, 0)
+#define GLGEN_MAC_LINK_TOPO			0x000B81DC /* Reset Source: GLOBR */
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_S		0
+#define GLGEN_MAC_LINK_TOPO_LINK_TOPO_M		MAKEMASK(0x3, 0)
+#define GLINT_CEQCTL(_INT)			(0x0015C000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_CEQCTL_MAX_INDEX			2047
+#define GLINT_CEQCTL_MSIX_INDX_S		0
+#define GLINT_CEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_CEQCTL_ITR_INDX_S			11
+#define GLINT_CEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define GLINT_CEQCTL_CAUSE_ENA_S		30
+#define GLINT_CEQCTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_CEQCTL_INTEVENT_S			31
+#define GLINT_CEQCTL_INTEVENT_M			BIT(31)
+#define GLINT_CTL				0x0016CC54 /* Reset Source: CORER */
+#define GLINT_CTL_DIS_AUTOMASK_S		0
+#define GLINT_CTL_DIS_AUTOMASK_M		BIT(0)
+#define GLINT_CTL_RSVD_S			1
+#define GLINT_CTL_RSVD_M			MAKEMASK(0x7FFF, 1)
+#define GLINT_CTL_ITR_GRAN_200_S		16
+#define GLINT_CTL_ITR_GRAN_200_M		MAKEMASK(0xF, 16)
+#define GLINT_CTL_ITR_GRAN_100_S		20
+#define GLINT_CTL_ITR_GRAN_100_M		MAKEMASK(0xF, 20)
+#define GLINT_CTL_ITR_GRAN_50_S			24
+#define GLINT_CTL_ITR_GRAN_50_M			MAKEMASK(0xF, 24)
+#define GLINT_CTL_ITR_GRAN_25_S			28
+#define GLINT_CTL_ITR_GRAN_25_M			MAKEMASK(0xF, 28)
+#define GLINT_DYN_CTL(_INT)			(0x00160000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_DYN_CTL_MAX_INDEX			2047
+#define GLINT_DYN_CTL_INTENA_S			0
+#define GLINT_DYN_CTL_INTENA_M			BIT(0)
+#define GLINT_DYN_CTL_CLEARPBA_S		1
+#define GLINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define GLINT_DYN_CTL_SWINT_TRIG_S		2
+#define GLINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define GLINT_DYN_CTL_ITR_INDX_S		3
+#define GLINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define GLINT_DYN_CTL_INTERVAL_S		5
+#define GLINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define GLINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define GLINT_DYN_CTL_SW_ITR_INDX_S		25
+#define GLINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define GLINT_DYN_CTL_WB_ON_ITR_S		30
+#define GLINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define GLINT_DYN_CTL_INTENA_MSK_S		31
+#define GLINT_DYN_CTL_INTENA_MSK_M		BIT(31)
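+
+/*
+ * Usage sketch only, assuming the wr32() helper and an initialized
+ * struct ice_hw from ice_osdep.h: enable one MSI-X vector through
+ * GLINT_DYN_CTL, acking its pending-bit-array entry at the same time.
+ * The function name is illustrative, not part of the register map.
+ */
+static inline void ice_example_irq_enable(struct ice_hw *hw, u16 vector)
+{
+	u32 val = GLINT_DYN_CTL_INTENA_M |	/* arm the vector */
+		  GLINT_DYN_CTL_CLEARPBA_M |	/* clear pending bit */
+		  (0 << GLINT_DYN_CTL_ITR_INDX_S); /* use ITR index 0 */
+
+	wr32(hw, GLINT_DYN_CTL(vector), val);
+}
+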
+#define GLINT_FW_TOOL_CTL			0x0016C840 /* Reset Source: CORER */
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_S		0
+#define GLINT_FW_TOOL_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define GLINT_FW_TOOL_CTL_ITR_INDX_S		11
+#define GLINT_FW_TOOL_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_S		30
+#define GLINT_FW_TOOL_CTL_CAUSE_ENA_M		BIT(30)
+#define GLINT_FW_TOOL_CTL_INTEVENT_S		31
+#define GLINT_FW_TOOL_CTL_INTEVENT_M		BIT(31)
+#define GLINT_ITR(_i, _INT)			(0x00154000 + ((_i) * 8192 + (_INT) * 4)) /* _i=0...2, _INT=0...2047 */ /* Reset Source: PFR */
+#define GLINT_ITR_MAX_INDEX			2
+#define GLINT_ITR_INTERVAL_S			0
+#define GLINT_ITR_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define GLINT_RATE(_INT)			(0x0015A000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define GLINT_RATE_MAX_INDEX			2047
+#define GLINT_RATE_INTERVAL_S			0
+#define GLINT_RATE_INTERVAL_M			MAKEMASK(0x3F, 0)
+#define GLINT_RATE_INTRL_ENA_S			6
+#define GLINT_RATE_INTRL_ENA_M			BIT(6)
+#define GLINT_TSYN_PFMSTR(_i)			(0x0016CCC0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLINT_TSYN_PFMSTR_MAX_INDEX		1
+#define GLINT_TSYN_PFMSTR_PF_MASTER_S		0
+#define GLINT_TSYN_PFMSTR_PF_MASTER_M		MAKEMASK(0x7, 0)
+#define GLINT_TSYN_PHY				0x0016CC50 /* Reset Source: CORER */
+#define GLINT_TSYN_PHY_PHY_INDX_S		0
+#define GLINT_TSYN_PHY_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define GLINT_VECT2FUNC(_INT)			(0x00162000 + ((_INT) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLINT_VECT2FUNC_MAX_INDEX		2047
+#define GLINT_VECT2FUNC_VF_NUM_S		0
+#define GLINT_VECT2FUNC_VF_NUM_M		MAKEMASK(0xFF, 0)
+#define GLINT_VECT2FUNC_PF_NUM_S		12
+#define GLINT_VECT2FUNC_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLINT_VECT2FUNC_IS_PF_S			16
+#define GLINT_VECT2FUNC_IS_PF_M			BIT(16)
+#define PF0INT_FW_HLP_CTL			0x0016C844 /* Reset Source: CORER */
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_FW_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_HLP_CTL_INTEVENT_S		31
+#define PF0INT_FW_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_FW_PSM_CTL			0x0016C848 /* Reset Source: CORER */
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_FW_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_FW_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_FW_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_FW_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_FW_PSM_CTL_INTEVENT_S		31
+#define PF0INT_FW_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_CPM_CTL			0x0016B2C0 /* Reset Source: CORER */
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_CPM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_HLP_CTL			0x0016B2C4 /* Reset Source: CORER */
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_HLP_CTL_INTEVENT_S		31
+#define PF0INT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_MBX_PSM_CTL			0x0016B2C8 /* Reset Source: CORER */
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define PF0INT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_S		11
+#define PF0INT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define PF0INT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_MBX_PSM_CTL_INTEVENT_S		31
+#define PF0INT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CPM				0x0016CC40 /* Reset Source: CORER */
+#define PF0INT_OICR_CPM_INTEVENT_S		0
+#define PF0INT_OICR_CPM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_CPM_QUEUE_S			1
+#define PF0INT_OICR_CPM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_CPM_RSV1_S			2
+#define PF0INT_OICR_CPM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_CPM_HH_COMP_S		10
+#define PF0INT_OICR_CPM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_CPM_TSYN_TX_S		11
+#define PF0INT_OICR_CPM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_CPM_TSYN_EVNT_S		12
+#define PF0INT_OICR_CPM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_CPM_TSYN_TGT_S		13
+#define PF0INT_OICR_CPM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_CPM_HLP_RDY_S		14
+#define PF0INT_OICR_CPM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_CPM_CPM_RDY_S		15
+#define PF0INT_OICR_CPM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_CPM_ECC_ERR_S		16
+#define PF0INT_OICR_CPM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_CPM_RSV2_S			17
+#define PF0INT_OICR_CPM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_CPM_MAL_DETECT_S		19
+#define PF0INT_OICR_CPM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_CPM_GRST_S			20
+#define PF0INT_OICR_CPM_GRST_M			BIT(20)
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_CPM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_CPM_GPIO_S			22
+#define PF0INT_OICR_CPM_GPIO_M			BIT(22)
+#define PF0INT_OICR_CPM_RSV3_S			23
+#define PF0INT_OICR_CPM_RSV3_M			BIT(23)
+#define PF0INT_OICR_CPM_STORM_DETECT_S		24
+#define PF0INT_OICR_CPM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_CPM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_CPM_HMC_ERR_S		26
+#define PF0INT_OICR_CPM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_CPM_PE_PUSH_S		27
+#define PF0INT_OICR_CPM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_CPM_PE_CRITERR_S		28
+#define PF0INT_OICR_CPM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_CPM_VFLR_S			29
+#define PF0INT_OICR_CPM_VFLR_M			BIT(29)
+#define PF0INT_OICR_CPM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_CPM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_CPM_SWINT_S			31
+#define PF0INT_OICR_CPM_SWINT_M			BIT(31)
+#define PF0INT_OICR_CTL_CPM			0x0016CC48 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_CPM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_CPM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_CPM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_CPM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_CPM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_HLP			0x0016CC5C /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_HLP_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_HLP_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_HLP_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_HLP_INTEVENT_S		31
+#define PF0INT_OICR_CTL_HLP_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_CTL_PSM			0x0016CC64 /* Reset Source: CORER */
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_S		0
+#define PF0INT_OICR_CTL_PSM_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_S		11
+#define PF0INT_OICR_CTL_PSM_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_S		30
+#define PF0INT_OICR_CTL_PSM_CAUSE_ENA_M		BIT(30)
+#define PF0INT_OICR_CTL_PSM_INTEVENT_S		31
+#define PF0INT_OICR_CTL_PSM_INTEVENT_M		BIT(31)
+#define PF0INT_OICR_ENA_CPM			0x0016CC60 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_CPM_RSV0_S		0
+#define PF0INT_OICR_ENA_CPM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_CPM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_CPM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_HLP			0x0016CC4C /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_HLP_RSV0_S		0
+#define PF0INT_OICR_ENA_HLP_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_HLP_INT_ENA_S		1
+#define PF0INT_OICR_ENA_HLP_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_ENA_PSM			0x0016CC58 /* Reset Source: CORER */
+#define PF0INT_OICR_ENA_PSM_RSV0_S		0
+#define PF0INT_OICR_ENA_PSM_RSV0_M		BIT(0)
+#define PF0INT_OICR_ENA_PSM_INT_ENA_S		1
+#define PF0INT_OICR_ENA_PSM_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PF0INT_OICR_HLP				0x0016CC68 /* Reset Source: CORER */
+#define PF0INT_OICR_HLP_INTEVENT_S		0
+#define PF0INT_OICR_HLP_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_HLP_QUEUE_S			1
+#define PF0INT_OICR_HLP_QUEUE_M			BIT(1)
+#define PF0INT_OICR_HLP_RSV1_S			2
+#define PF0INT_OICR_HLP_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_HLP_HH_COMP_S		10
+#define PF0INT_OICR_HLP_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_HLP_TSYN_TX_S		11
+#define PF0INT_OICR_HLP_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_HLP_TSYN_EVNT_S		12
+#define PF0INT_OICR_HLP_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_HLP_TSYN_TGT_S		13
+#define PF0INT_OICR_HLP_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_HLP_HLP_RDY_S		14
+#define PF0INT_OICR_HLP_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_HLP_CPM_RDY_S		15
+#define PF0INT_OICR_HLP_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_HLP_ECC_ERR_S		16
+#define PF0INT_OICR_HLP_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_HLP_RSV2_S			17
+#define PF0INT_OICR_HLP_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_HLP_MAL_DETECT_S		19
+#define PF0INT_OICR_HLP_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_HLP_GRST_S			20
+#define PF0INT_OICR_HLP_GRST_M			BIT(20)
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_HLP_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_HLP_GPIO_S			22
+#define PF0INT_OICR_HLP_GPIO_M			BIT(22)
+#define PF0INT_OICR_HLP_RSV3_S			23
+#define PF0INT_OICR_HLP_RSV3_M			BIT(23)
+#define PF0INT_OICR_HLP_STORM_DETECT_S		24
+#define PF0INT_OICR_HLP_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_HLP_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_HLP_HMC_ERR_S		26
+#define PF0INT_OICR_HLP_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_HLP_PE_PUSH_S		27
+#define PF0INT_OICR_HLP_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_HLP_PE_CRITERR_S		28
+#define PF0INT_OICR_HLP_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_HLP_VFLR_S			29
+#define PF0INT_OICR_HLP_VFLR_M			BIT(29)
+#define PF0INT_OICR_HLP_XLR_HW_DONE_S		30
+#define PF0INT_OICR_HLP_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_HLP_SWINT_S			31
+#define PF0INT_OICR_HLP_SWINT_M			BIT(31)
+#define PF0INT_OICR_PSM				0x0016CC44 /* Reset Source: CORER */
+#define PF0INT_OICR_PSM_INTEVENT_S		0
+#define PF0INT_OICR_PSM_INTEVENT_M		BIT(0)
+#define PF0INT_OICR_PSM_QUEUE_S			1
+#define PF0INT_OICR_PSM_QUEUE_M			BIT(1)
+#define PF0INT_OICR_PSM_RSV1_S			2
+#define PF0INT_OICR_PSM_RSV1_M			MAKEMASK(0xFF, 2)
+#define PF0INT_OICR_PSM_HH_COMP_S		10
+#define PF0INT_OICR_PSM_HH_COMP_M		BIT(10)
+#define PF0INT_OICR_PSM_TSYN_TX_S		11
+#define PF0INT_OICR_PSM_TSYN_TX_M		BIT(11)
+#define PF0INT_OICR_PSM_TSYN_EVNT_S		12
+#define PF0INT_OICR_PSM_TSYN_EVNT_M		BIT(12)
+#define PF0INT_OICR_PSM_TSYN_TGT_S		13
+#define PF0INT_OICR_PSM_TSYN_TGT_M		BIT(13)
+#define PF0INT_OICR_PSM_HLP_RDY_S		14
+#define PF0INT_OICR_PSM_HLP_RDY_M		BIT(14)
+#define PF0INT_OICR_PSM_CPM_RDY_S		15
+#define PF0INT_OICR_PSM_CPM_RDY_M		BIT(15)
+#define PF0INT_OICR_PSM_ECC_ERR_S		16
+#define PF0INT_OICR_PSM_ECC_ERR_M		BIT(16)
+#define PF0INT_OICR_PSM_RSV2_S			17
+#define PF0INT_OICR_PSM_RSV2_M			MAKEMASK(0x3, 17)
+#define PF0INT_OICR_PSM_MAL_DETECT_S		19
+#define PF0INT_OICR_PSM_MAL_DETECT_M		BIT(19)
+#define PF0INT_OICR_PSM_GRST_S			20
+#define PF0INT_OICR_PSM_GRST_M			BIT(20)
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_S		21
+#define PF0INT_OICR_PSM_PCI_EXCEPTION_M		BIT(21)
+#define PF0INT_OICR_PSM_GPIO_S			22
+#define PF0INT_OICR_PSM_GPIO_M			BIT(22)
+#define PF0INT_OICR_PSM_RSV3_S			23
+#define PF0INT_OICR_PSM_RSV3_M			BIT(23)
+#define PF0INT_OICR_PSM_STORM_DETECT_S		24
+#define PF0INT_OICR_PSM_STORM_DETECT_M		BIT(24)
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_S	25
+#define PF0INT_OICR_PSM_LINK_STAT_CHANGE_M	BIT(25)
+#define PF0INT_OICR_PSM_HMC_ERR_S		26
+#define PF0INT_OICR_PSM_HMC_ERR_M		BIT(26)
+#define PF0INT_OICR_PSM_PE_PUSH_S		27
+#define PF0INT_OICR_PSM_PE_PUSH_M		BIT(27)
+#define PF0INT_OICR_PSM_PE_CRITERR_S		28
+#define PF0INT_OICR_PSM_PE_CRITERR_M		BIT(28)
+#define PF0INT_OICR_PSM_VFLR_S			29
+#define PF0INT_OICR_PSM_VFLR_M			BIT(29)
+#define PF0INT_OICR_PSM_XLR_HW_DONE_S		30
+#define PF0INT_OICR_PSM_XLR_HW_DONE_M		BIT(30)
+#define PF0INT_OICR_PSM_SWINT_S			31
+#define PF0INT_OICR_PSM_SWINT_M			BIT(31)
+#define PF0INT_SB_CPM_CTL			0x0016B2CC /* Reset Source: CORER */
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_CPM_CTL_ITR_INDX_S		11
+#define PF0INT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_CPM_CTL_INTEVENT_S		31
+#define PF0INT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define PF0INT_SB_HLP_CTL			0x0016B640 /* Reset Source: CORER */
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_S		0
+#define PF0INT_SB_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PF0INT_SB_HLP_CTL_ITR_INDX_S		11
+#define PF0INT_SB_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_S		30
+#define PF0INT_SB_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define PF0INT_SB_HLP_CTL_INTEVENT_S		31
+#define PF0INT_SB_HLP_CTL_INTEVENT_M		BIT(31)
+#define PFINT_AEQCTL				0x0016CB00 /* Reset Source: CORER */
+#define PFINT_AEQCTL_MSIX_INDX_S		0
+#define PFINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_AEQCTL_ITR_INDX_S			11
+#define PFINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_AEQCTL_CAUSE_ENA_S		30
+#define PFINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_AEQCTL_INTEVENT_S			31
+#define PFINT_AEQCTL_INTEVENT_M			BIT(31)
+#define PFINT_ALLOC				0x001D2600 /* Reset Source: CORER */
+#define PFINT_ALLOC_FIRST_S			0
+#define PFINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_LAST_S			12
+#define PFINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_VALID_S			31
+#define PFINT_ALLOC_VALID_M			BIT(31)
+#define PFINT_ALLOC_PCI				0x0009D800 /* Reset Source: PCIR */
+#define PFINT_ALLOC_PCI_FIRST_S			0
+#define PFINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define PFINT_ALLOC_PCI_LAST_S			12
+#define PFINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define PFINT_ALLOC_PCI_VALID_S			31
+#define PFINT_ALLOC_PCI_VALID_M			BIT(31)
+#define PFINT_FW_CTL				0x0016C800 /* Reset Source: CORER */
+#define PFINT_FW_CTL_MSIX_INDX_S		0
+#define PFINT_FW_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_FW_CTL_ITR_INDX_S			11
+#define PFINT_FW_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_FW_CTL_CAUSE_ENA_S		30
+#define PFINT_FW_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_FW_CTL_INTEVENT_S			31
+#define PFINT_FW_CTL_INTEVENT_M			BIT(31)
+#define PFINT_GPIO_ENA				0x00088080 /* Reset Source: CORER */
+#define PFINT_GPIO_ENA_GPIO0_ENA_S		0
+#define PFINT_GPIO_ENA_GPIO0_ENA_M		BIT(0)
+#define PFINT_GPIO_ENA_GPIO1_ENA_S		1
+#define PFINT_GPIO_ENA_GPIO1_ENA_M		BIT(1)
+#define PFINT_GPIO_ENA_GPIO2_ENA_S		2
+#define PFINT_GPIO_ENA_GPIO2_ENA_M		BIT(2)
+#define PFINT_GPIO_ENA_GPIO3_ENA_S		3
+#define PFINT_GPIO_ENA_GPIO3_ENA_M		BIT(3)
+#define PFINT_GPIO_ENA_GPIO4_ENA_S		4
+#define PFINT_GPIO_ENA_GPIO4_ENA_M		BIT(4)
+#define PFINT_GPIO_ENA_GPIO5_ENA_S		5
+#define PFINT_GPIO_ENA_GPIO5_ENA_M		BIT(5)
+#define PFINT_GPIO_ENA_GPIO6_ENA_S		6
+#define PFINT_GPIO_ENA_GPIO6_ENA_M		BIT(6)
+#define PFINT_MBX_CTL				0x0016B280 /* Reset Source: CORER */
+#define PFINT_MBX_CTL_MSIX_INDX_S		0
+#define PFINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_MBX_CTL_ITR_INDX_S		11
+#define PFINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_MBX_CTL_CAUSE_ENA_S		30
+#define PFINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_MBX_CTL_INTEVENT_S		31
+#define PFINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR				0x0016CA00 /* Reset Source: CORER */
+#define PFINT_OICR_INTEVENT_S			0
+#define PFINT_OICR_INTEVENT_M			BIT(0)
+#define PFINT_OICR_QUEUE_S			1
+#define PFINT_OICR_QUEUE_M			BIT(1)
+#define PFINT_OICR_RSV1_S			2
+#define PFINT_OICR_RSV1_M			MAKEMASK(0xFF, 2)
+#define PFINT_OICR_HH_COMP_S			10
+#define PFINT_OICR_HH_COMP_M			BIT(10)
+#define PFINT_OICR_TSYN_TX_S			11
+#define PFINT_OICR_TSYN_TX_M			BIT(11)
+#define PFINT_OICR_TSYN_EVNT_S			12
+#define PFINT_OICR_TSYN_EVNT_M			BIT(12)
+#define PFINT_OICR_TSYN_TGT_S			13
+#define PFINT_OICR_TSYN_TGT_M			BIT(13)
+#define PFINT_OICR_HLP_RDY_S			14
+#define PFINT_OICR_HLP_RDY_M			BIT(14)
+#define PFINT_OICR_CPM_RDY_S			15
+#define PFINT_OICR_CPM_RDY_M			BIT(15)
+#define PFINT_OICR_ECC_ERR_S			16
+#define PFINT_OICR_ECC_ERR_M			BIT(16)
+#define PFINT_OICR_RSV2_S			17
+#define PFINT_OICR_RSV2_M			MAKEMASK(0x3, 17)
+#define PFINT_OICR_MAL_DETECT_S			19
+#define PFINT_OICR_MAL_DETECT_M			BIT(19)
+#define PFINT_OICR_GRST_S			20
+#define PFINT_OICR_GRST_M			BIT(20)
+#define PFINT_OICR_PCI_EXCEPTION_S		21
+#define PFINT_OICR_PCI_EXCEPTION_M		BIT(21)
+#define PFINT_OICR_GPIO_S			22
+#define PFINT_OICR_GPIO_M			BIT(22)
+#define PFINT_OICR_RSV3_S			23
+#define PFINT_OICR_RSV3_M			BIT(23)
+#define PFINT_OICR_STORM_DETECT_S		24
+#define PFINT_OICR_STORM_DETECT_M		BIT(24)
+#define PFINT_OICR_LINK_STAT_CHANGE_S		25
+#define PFINT_OICR_LINK_STAT_CHANGE_M		BIT(25)
+#define PFINT_OICR_HMC_ERR_S			26
+#define PFINT_OICR_HMC_ERR_M			BIT(26)
+#define PFINT_OICR_PE_PUSH_S			27
+#define PFINT_OICR_PE_PUSH_M			BIT(27)
+#define PFINT_OICR_PE_CRITERR_S			28
+#define PFINT_OICR_PE_CRITERR_M			BIT(28)
+#define PFINT_OICR_VFLR_S			29
+#define PFINT_OICR_VFLR_M			BIT(29)
+#define PFINT_OICR_XLR_HW_DONE_S		30
+#define PFINT_OICR_XLR_HW_DONE_M		BIT(30)
+#define PFINT_OICR_SWINT_S			31
+#define PFINT_OICR_SWINT_M			BIT(31)
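+
+/*
+ * Usage sketch only, assuming rd32() from ice_osdep.h: read the PF
+ * "other interrupt" cause register and test individual cause bits with
+ * the masks above (here: global reset and malicious-driver events).
+ */
+static inline u32 ice_example_oicr_causes(struct ice_hw *hw)
+{
+	u32 oicr = rd32(hw, PFINT_OICR);	/* latched "other" causes */
+
+	return oicr & (PFINT_OICR_GRST_M | PFINT_OICR_MAL_DETECT_M);
+}
+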
+#define PFINT_OICR_CTL				0x0016CA80 /* Reset Source: CORER */
+#define PFINT_OICR_CTL_MSIX_INDX_S		0
+#define PFINT_OICR_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_OICR_CTL_ITR_INDX_S		11
+#define PFINT_OICR_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define PFINT_OICR_CTL_CAUSE_ENA_S		30
+#define PFINT_OICR_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_OICR_CTL_INTEVENT_S		31
+#define PFINT_OICR_CTL_INTEVENT_M		BIT(31)
+#define PFINT_OICR_ENA				0x0016C900 /* Reset Source: CORER */
+#define PFINT_OICR_ENA_RSV0_S			0
+#define PFINT_OICR_ENA_RSV0_M			BIT(0)
+#define PFINT_OICR_ENA_INT_ENA_S		1
+#define PFINT_OICR_ENA_INT_ENA_M		MAKEMASK(0x7FFFFFFF, 1)
+#define PFINT_SB_CTL				0x0016B600 /* Reset Source: CORER */
+#define PFINT_SB_CTL_MSIX_INDX_S		0
+#define PFINT_SB_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define PFINT_SB_CTL_ITR_INDX_S			11
+#define PFINT_SB_CTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define PFINT_SB_CTL_CAUSE_ENA_S		30
+#define PFINT_SB_CTL_CAUSE_ENA_M		BIT(30)
+#define PFINT_SB_CTL_INTEVENT_S			31
+#define PFINT_SB_CTL_INTEVENT_M			BIT(31)
+#define PFINT_TSYN_MSK				0x0016C980 /* Reset Source: CORER */
+#define PFINT_TSYN_MSK_PHY_INDX_S		0
+#define PFINT_TSYN_MSK_PHY_INDX_M		MAKEMASK(0x1F, 0)
+#define QINT_RQCTL(_QRX)			(0x00150000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QINT_RQCTL_MAX_INDEX			2047
+#define QINT_RQCTL_MSIX_INDX_S			0
+#define QINT_RQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_RQCTL_ITR_INDX_S			11
+#define QINT_RQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_RQCTL_CAUSE_ENA_S			30
+#define QINT_RQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_RQCTL_INTEVENT_S			31
+#define QINT_RQCTL_INTEVENT_M			BIT(31)
+#define QINT_TQCTL(_DBQM)			(0x00140000 + ((_DBQM) * 4)) /* _i=0...16383 */ /* Reset Source: CORER */
+#define QINT_TQCTL_MAX_INDEX			16383
+#define QINT_TQCTL_MSIX_INDX_S			0
+#define QINT_TQCTL_MSIX_INDX_M			MAKEMASK(0x7FF, 0)
+#define QINT_TQCTL_ITR_INDX_S			11
+#define QINT_TQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define QINT_TQCTL_CAUSE_ENA_S			30
+#define QINT_TQCTL_CAUSE_ENA_M			BIT(30)
+#define QINT_TQCTL_INTEVENT_S			31
+#define QINT_TQCTL_INTEVENT_M			BIT(31)
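+
+/*
+ * Usage sketch only, assuming wr32() from ice_osdep.h: bind one RX and
+ * one TX queue to an MSI-X vector by programming the per-queue cause
+ * control registers above.  Function and parameter names are
+ * illustrative.
+ */
+static inline void ice_example_queue_bind(struct ice_hw *hw, u16 rxq,
+					  u16 txq, u16 vector, u8 itr_idx)
+{
+	u32 val = QINT_RQCTL_CAUSE_ENA_M |
+		  ((u32)itr_idx << QINT_RQCTL_ITR_INDX_S) |
+		  ((u32)vector << QINT_RQCTL_MSIX_INDX_S);
+
+	wr32(hw, QINT_RQCTL(rxq), val);
+
+	val = QINT_TQCTL_CAUSE_ENA_M |
+	      ((u32)itr_idx << QINT_TQCTL_ITR_INDX_S) |
+	      ((u32)vector << QINT_TQCTL_MSIX_INDX_S);
+	wr32(hw, QINT_TQCTL(txq), val);
+}
+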
+#define VPINT_AEQCTL(_VF)			(0x0016B800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_AEQCTL_MAX_INDEX			255
+#define VPINT_AEQCTL_MSIX_INDX_S		0
+#define VPINT_AEQCTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_AEQCTL_ITR_INDX_S			11
+#define VPINT_AEQCTL_ITR_INDX_M			MAKEMASK(0x3, 11)
+#define VPINT_AEQCTL_CAUSE_ENA_S		30
+#define VPINT_AEQCTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_AEQCTL_INTEVENT_S			31
+#define VPINT_AEQCTL_INTEVENT_M			BIT(31)
+#define VPINT_ALLOC(_VF)			(0x001D1000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPINT_ALLOC_MAX_INDEX			255
+#define VPINT_ALLOC_FIRST_S			0
+#define VPINT_ALLOC_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_LAST_S			12
+#define VPINT_ALLOC_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_VALID_S			31
+#define VPINT_ALLOC_VALID_M			BIT(31)
+#define VPINT_ALLOC_PCI(_VF)			(0x0009D000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define VPINT_ALLOC_PCI_MAX_INDEX		255
+#define VPINT_ALLOC_PCI_FIRST_S			0
+#define VPINT_ALLOC_PCI_FIRST_M			MAKEMASK(0x7FF, 0)
+#define VPINT_ALLOC_PCI_LAST_S			12
+#define VPINT_ALLOC_PCI_LAST_M			MAKEMASK(0x7FF, 12)
+#define VPINT_ALLOC_PCI_VALID_S			31
+#define VPINT_ALLOC_PCI_VALID_M			BIT(31)
+#define VPINT_MBX_CPM_CTL(_VP128)		(0x0016B000 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_MBX_CPM_CTL_MAX_INDEX		127
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CPM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CPM_CTL_INTEVENT_S		31
+#define VPINT_MBX_CPM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_CTL(_VSI)			(0x0016A000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VPINT_MBX_CTL_MAX_INDEX			767
+#define VPINT_MBX_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_CTL_ITR_INDX_S		11
+#define VPINT_MBX_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_CTL_INTEVENT_S		31
+#define VPINT_MBX_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_HLP_CTL(_VP16)		(0x0016B200 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_HLP_CTL_MAX_INDEX		15
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_HLP_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_HLP_CTL_ITR_INDX_S		11
+#define VPINT_MBX_HLP_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_HLP_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_HLP_CTL_INTEVENT_S		31
+#define VPINT_MBX_HLP_CTL_INTEVENT_M		BIT(31)
+#define VPINT_MBX_PSM_CTL(_VP16)		(0x0016B240 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPINT_MBX_PSM_CTL_MAX_INDEX		15
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_S		0
+#define VPINT_MBX_PSM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_MBX_PSM_CTL_ITR_INDX_S		11
+#define VPINT_MBX_PSM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_S		30
+#define VPINT_MBX_PSM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_MBX_PSM_CTL_INTEVENT_S		31
+#define VPINT_MBX_PSM_CTL_INTEVENT_M		BIT(31)
+#define VPINT_SB_CPM_CTL(_VP128)		(0x0016B400 + ((_VP128) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define VPINT_SB_CPM_CTL_MAX_INDEX		127
+#define VPINT_SB_CPM_CTL_MSIX_INDX_S		0
+#define VPINT_SB_CPM_CTL_MSIX_INDX_M		MAKEMASK(0x7FF, 0)
+#define VPINT_SB_CPM_CTL_ITR_INDX_S		11
+#define VPINT_SB_CPM_CTL_ITR_INDX_M		MAKEMASK(0x3, 11)
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_S		30
+#define VPINT_SB_CPM_CTL_CAUSE_ENA_M		BIT(30)
+#define VPINT_SB_CPM_CTL_INTEVENT_S		31
+#define VPINT_SB_CPM_CTL_INTEVENT_M		BIT(31)
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE(_i)	(0x00049240 + ((_i) * 4)) /* _i=0...20 */ /* Reset Source: CORER */
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_MAX_INDEX	20
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_S 0
+#define GL_HLP_PRT_IPG_PREAMBLE_SIZE_IPG_PREAMBLE_SIZE_M MAKEMASK(0xFF, 0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE(_i)		(0x00049294 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_TDPU_PSM_DEFAULT_RECIPE_MAX_INDEX	3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_S	0
+#define GL_TDPU_PSM_DEFAULT_RECIPE_ADD_IPG_M	BIT(0)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_S	1
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_CRC_M	BIT(1)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_S 2
+#define GL_TDPU_PSM_DEFAULT_RECIPE_SUB_ESP_TRAILER_M BIT(2)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_S 3
+#define GL_TDPU_PSM_DEFAULT_RECIPE_INCLUDE_L2_PAD_M BIT(3)
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_S 4
+#define GL_TDPU_PSM_DEFAULT_RECIPE_DEFAULT_UPDATE_MODE_M BIT(4)
+#define GLLAN_PF_RECIPE(_i)			(0x0029420C + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLLAN_PF_RECIPE_MAX_INDEX		7
+#define GLLAN_PF_RECIPE_RECIPE_S		0
+#define GLLAN_PF_RECIPE_RECIPE_M		MAKEMASK(0x3, 0)
+#define GLLAN_RCTL_0				0x002941F8 /* Reset Source: CORER */
+#define GLLAN_RCTL_0_PXE_MODE_S			0
+#define GLLAN_RCTL_0_PXE_MODE_M			BIT(0)
+#define GLLAN_RCTL_1				0x002941FC /* Reset Source: CORER */
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_S		12
+#define GLLAN_RCTL_1_RXMAX_EXPANSION_M		MAKEMASK(0xF, 12)
+#define GLLAN_RCTL_1_RXDRDCTL_S			17
+#define GLLAN_RCTL_1_RXDRDCTL_M			BIT(17)
+#define GLLAN_RCTL_1_RXDESCRDROEN_S		18
+#define GLLAN_RCTL_1_RXDESCRDROEN_M		BIT(18)
+#define GLLAN_RCTL_1_RXDATAWRROEN_S		19
+#define GLLAN_RCTL_1_RXDATAWRROEN_M		BIT(19)
+#define GLLAN_TSOMSK_F				0x00049308 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_F_TCPMSKF_S		0
+#define GLLAN_TSOMSK_F_TCPMSKF_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_L				0x00049310 /* Reset Source: CORER */
+#define GLLAN_TSOMSK_L_TCPMSKL_S		0
+#define GLLAN_TSOMSK_L_TCPMSKL_M		MAKEMASK(0xFFF, 0)
+#define GLLAN_TSOMSK_M				0x0004930C /* Reset Source: CORER */
+#define GLLAN_TSOMSK_M_TCPMSKM_S		0
+#define GLLAN_TSOMSK_M_TCPMSKM_M		MAKEMASK(0xFFF, 0)
+#define PFLAN_CP_QALLOC				0x00075700 /* Reset Source: CORER */
+#define PFLAN_CP_QALLOC_FIRSTQ_S		0
+#define PFLAN_CP_QALLOC_FIRSTQ_M		MAKEMASK(0x1FF, 0)
+#define PFLAN_CP_QALLOC_LASTQ_S			16
+#define PFLAN_CP_QALLOC_LASTQ_M			MAKEMASK(0x1FF, 16)
+#define PFLAN_CP_QALLOC_VALID_S			31
+#define PFLAN_CP_QALLOC_VALID_M			BIT(31)
+#define PFLAN_DB_QALLOC				0x00075680 /* Reset Source: CORER */
+#define PFLAN_DB_QALLOC_FIRSTQ_S		0
+#define PFLAN_DB_QALLOC_FIRSTQ_M		MAKEMASK(0xFF, 0)
+#define PFLAN_DB_QALLOC_LASTQ_S			16
+#define PFLAN_DB_QALLOC_LASTQ_M			MAKEMASK(0xFF, 16)
+#define PFLAN_DB_QALLOC_VALID_S			31
+#define PFLAN_DB_QALLOC_VALID_M			BIT(31)
+#define PFLAN_RX_QALLOC				0x001D2500 /* Reset Source: CORER */
+#define PFLAN_RX_QALLOC_FIRSTQ_S		0
+#define PFLAN_RX_QALLOC_FIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define PFLAN_RX_QALLOC_LASTQ_S			16
+#define PFLAN_RX_QALLOC_LASTQ_M			MAKEMASK(0x7FF, 16)
+#define PFLAN_RX_QALLOC_VALID_S			31
+#define PFLAN_RX_QALLOC_VALID_M			BIT(31)
+#define PFLAN_TX_QALLOC				0x001D2580 /* Reset Source: CORER */
+#define PFLAN_TX_QALLOC_FIRSTQ_S		0
+#define PFLAN_TX_QALLOC_FIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define PFLAN_TX_QALLOC_LASTQ_S			16
+#define PFLAN_TX_QALLOC_LASTQ_M			MAKEMASK(0x3FFF, 16)
+#define PFLAN_TX_QALLOC_VALID_S			31
+#define PFLAN_TX_QALLOC_VALID_M			BIT(31)
+#define QRX_CONTEXT(_i, _QRX)			(0x00280000 + ((_i) * 8192 + (_QRX) * 4)) /* _i=0...7, _QRX=0...2047 */ /* Reset Source: CORER */
+#define QRX_CONTEXT_MAX_INDEX			7
+#define QRX_CONTEXT_RXQ_CONTEXT_S		0
+#define QRX_CONTEXT_RXQ_CONTEXT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define QRX_CTRL(_QRX)				(0x00120000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: PFR */
+#define QRX_CTRL_MAX_INDEX			2047
+#define QRX_CTRL_QENA_REQ_S			0
+#define QRX_CTRL_QENA_REQ_M			BIT(0)
+#define QRX_CTRL_FAST_QDIS_S			1
+#define QRX_CTRL_FAST_QDIS_M			BIT(1)
+#define QRX_CTRL_QENA_STAT_S			2
+#define QRX_CTRL_QENA_STAT_M			BIT(2)
+#define QRX_CTRL_CDE_S				3
+#define QRX_CTRL_CDE_M				BIT(3)
+#define QRX_CTRL_CDS_S				4
+#define QRX_CTRL_CDS_M				BIT(4)
+#define QRX_ITR(_QRX)				(0x00292000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_ITR_MAX_INDEX			2047
+#define QRX_ITR_NO_EXPR_S			0
+#define QRX_ITR_NO_EXPR_M			BIT(0)
+#define QRX_TAIL(_QRX)				(0x00290000 + ((_QRX) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define QRX_TAIL_MAX_INDEX			2047
+#define QRX_TAIL_TAIL_S				0
+#define QRX_TAIL_TAIL_M				MAKEMASK(0x1FFF, 0)
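+
+/*
+ * Usage sketch only, assuming wr32() from ice_osdep.h: after refilling
+ * RX descriptors, advance the queue tail so hardware can use the new
+ * buffers.  The value is masked to the 13-bit TAIL field above.
+ */
+static inline void ice_example_rx_tail_bump(struct ice_hw *hw, u16 qid,
+					    u16 next_to_use)
+{
+	wr32(hw, QRX_TAIL(qid),
+	     ((u32)next_to_use << QRX_TAIL_TAIL_S) & QRX_TAIL_TAIL_M);
+}
+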
+#define VPDSI_RX_QTABLE(_i, _VP16)		(0x00074C00 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_RX_QTABLE_MAX_INDEX		15
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_RX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_RX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_RX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_RX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPDSI_TX_QTABLE(_i, _VP16)		(0x001D2000 + ((_i) * 64 + (_VP16) * 4)) /* _i=0...15, _VP16=0...15 */ /* Reset Source: CORER */
+#define VPDSI_TX_QTABLE_MAX_INDEX		15
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_S		0
+#define VPDSI_TX_QTABLE_PAGE_INDEX0_M		MAKEMASK(0x7F, 0)
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_S		8
+#define VPDSI_TX_QTABLE_PAGE_INDEX1_M		MAKEMASK(0x7F, 8)
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_S		16
+#define VPDSI_TX_QTABLE_PAGE_INDEX2_M		MAKEMASK(0x7F, 16)
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_S		24
+#define VPDSI_TX_QTABLE_PAGE_INDEX3_M		MAKEMASK(0x7F, 24)
+#define VPLAN_DB_QTABLE(_i, _VF)		(0x00070000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...3, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_DB_QTABLE_MAX_INDEX		3
+#define VPLAN_DB_QTABLE_QINDEX_S		0
+#define VPLAN_DB_QTABLE_QINDEX_M		MAKEMASK(0x1FF, 0)
+#define VPLAN_DSI_VF_MODE(_VP16)		(0x002D2C00 + ((_VP16) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define VPLAN_DSI_VF_MODE_MAX_INDEX		15
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_S	0
+#define VPLAN_DSI_VF_MODE_LAN_DSI_VF_MODE_M	BIT(0)
+#define VPLAN_RX_QBASE(_VF)			(0x00072000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QBASE_MAX_INDEX		255
+#define VPLAN_RX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_RX_QBASE_VFFIRSTQ_M		MAKEMASK(0x7FF, 0)
+#define VPLAN_RX_QBASE_VFNUMQ_S			16
+#define VPLAN_RX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_RX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_RX_QTABLE(_i, _VF)		(0x00060000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RX_QTABLE_MAX_INDEX		15
+#define VPLAN_RX_QTABLE_QINDEX_S		0
+#define VPLAN_RX_QTABLE_QINDEX_M		MAKEMASK(0xFFF, 0)
+#define VPLAN_RXQ_MAPENA(_VF)			(0x00073000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_RXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_RXQ_MAPENA_RX_ENA_S		0
+#define VPLAN_RXQ_MAPENA_RX_ENA_M		BIT(0)
+#define VPLAN_TX_QBASE(_VF)			(0x001D1800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QBASE_MAX_INDEX		255
+#define VPLAN_TX_QBASE_VFFIRSTQ_S		0
+#define VPLAN_TX_QBASE_VFFIRSTQ_M		MAKEMASK(0x3FFF, 0)
+#define VPLAN_TX_QBASE_VFNUMQ_S			16
+#define VPLAN_TX_QBASE_VFNUMQ_M			MAKEMASK(0xFF, 16)
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_S		31
+#define VPLAN_TX_QBASE_VFQTABLE_ENA_M		BIT(31)
+#define VPLAN_TX_QTABLE(_i, _VF)		(0x001C0000 + ((_i) * 2048 + (_VF) * 4)) /* _i=0...15, _VF=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TX_QTABLE_MAX_INDEX		15
+#define VPLAN_TX_QTABLE_QINDEX_S		0
+#define VPLAN_TX_QTABLE_QINDEX_M		MAKEMASK(0x7FFF, 0)
+#define VPLAN_TXQ_MAPENA(_VF)			(0x00073800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPLAN_TXQ_MAPENA_MAX_INDEX		255
+#define VPLAN_TXQ_MAPENA_TX_ENA_S		0
+#define VPLAN_TXQ_MAPENA_TX_ENA_M		BIT(0)
+#define VSILAN_QBASE(_VSI)			(0x0044C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QBASE_MAX_INDEX			767
+#define VSILAN_QBASE_VSIBASE_S			0
+#define VSILAN_QBASE_VSIBASE_M			MAKEMASK(0x7FF, 0)
+#define VSILAN_QBASE_VSIQTABLE_ENA_S		11
+#define VSILAN_QBASE_VSIQTABLE_ENA_M		BIT(11)
+#define VSILAN_QTABLE(_i, _VSI)			(0x00440000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...7, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSILAN_QTABLE_MAX_INDEX			7
+#define VSILAN_QTABLE_QINDEX_0_S		0
+#define VSILAN_QTABLE_QINDEX_0_M		MAKEMASK(0x7FF, 0)
+#define VSILAN_QTABLE_QINDEX_1_S		16
+#define VSILAN_QTABLE_QINDEX_1_M		MAKEMASK(0x7FF, 16)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP		0x001E31C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP		0x001E34C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP		0x001E35C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_S 0
+#define PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL	0x001E36C0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_S 0
+#define PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_M BIT(0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3220 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3240 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE		0x001E3180 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1	0x001E3280 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2	0x001E32A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_RX_QUANTA_S		0x001E3C40 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_S 0
+#define PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_PRTMAC_HSEC_CTL_RX_QUANTA_SHIFT_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE		0x001E31A0 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_M MAKEMASK(0x1FF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i)	(0x001E36E0 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32)) /* _i=0...8 */ /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_S 0
+#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART1		0x001E3960 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_HSEC_CTL_TX_SA_PART2		0x001E3980 /* Reset Source: GLOBR */
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_S 0
+#define PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_LINK_DOWN_COUNTER		0x001E47C0 /* Reset Source: GLOBR */
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_S 0
+#define PRTMAC_LINK_DOWN_COUNTER_LINK_DOWN_COUNTER_M MAKEMASK(0xFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_ENABLE(_i)		(0x001E3C60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_ENABLE_MAX_INDEX	7
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_ENABLE_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_MD_OVRRIDE_VAL(_i)		(0x001E3D60 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: GLOBR */
+#define PRTMAC_MD_OVRRIDE_VAL_MAX_INDEX		7
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_S 0
+#define PRTMAC_MD_OVRRIDE_VAL_PRTMAC_MD_OVRRIDE_ENABLE_M MAKEMASK(0xFFFFFFFF, 0)
+#define PRTMAC_RX_CNT_MRKR			0x001E48E0 /* Reset Source: GLOBR */
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_S	0
+#define PRTMAC_RX_CNT_MRKR_RX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT			0x001E3C20 /* Reset Source: GLOBR */
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_S	0
+#define PRTMAC_RX_PKT_DRP_CNT_RX_PKT_DRP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_S 16
+#define PRTMAC_RX_PKT_DRP_CNT_RX_MKR_PKT_DRP_CNT_M MAKEMASK(0xFFFF, 16)
+#define PRTMAC_TX_CNT_MRKR			0x001E48C0 /* Reset Source: GLOBR */
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_S	0
+#define PRTMAC_TX_CNT_MRKR_TX_CNT_MRKR_M	MAKEMASK(0xFFFF, 0)
+#define PRTMAC_TX_LNK_UP_CNT			0x001E4840 /* Reset Source: GLOBR */
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_S	0
+#define PRTMAC_TX_LNK_UP_CNT_TX_LINK_UP_CNT_M	MAKEMASK(0xFFFF, 0)
+#define GL_MDCK_CFG1_TX_PQM			0x002D2DF4 /* Reset Source: CORER */
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_S	0
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DATA_LEN_M	MAKEMASK(0xFF, 0)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_S	8
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_PKT_CNT_M	MAKEMASK(0x3F, 8)
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_S	16
+#define GL_MDCK_CFG1_TX_PQM_SSO_MAX_DESC_CNT_M	MAKEMASK(0x3F, 16)
+#define GL_MDCK_EN_TX_PQM			0x002D2DFC /* Reset Source: CORER */
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_S	0
+#define GL_MDCK_EN_TX_PQM_PCI_DUMMY_COMP_M	BIT(0)
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_S		1
+#define GL_MDCK_EN_TX_PQM_PCI_UR_COMP_M		BIT(1)
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_S	3
+#define GL_MDCK_EN_TX_PQM_RCV_SH_BE_LSO_M	BIT(3)
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_S	4
+#define GL_MDCK_EN_TX_PQM_Q_FL_MNG_EPY_CH_M	BIT(4)
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_S	5
+#define GL_MDCK_EN_TX_PQM_Q_EPY_MNG_FL_CH_M	BIT(5)
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_S	6
+#define GL_MDCK_EN_TX_PQM_LSO_NUMDESCS_ZERO_M	BIT(6)
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_S	7
+#define GL_MDCK_EN_TX_PQM_LSO_LENGTH_ZERO_M	BIT(7)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_S	8
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_BELOW_MIN_M	BIT(8)
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_S	9
+#define GL_MDCK_EN_TX_PQM_LSO_MSS_ABOVE_MAX_M	BIT(9)
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_S	10
+#define GL_MDCK_EN_TX_PQM_LSO_HDR_SIZE_ZERO_M	BIT(10)
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_S	11
+#define GL_MDCK_EN_TX_PQM_RCV_CNT_BE_LSO_M	BIT(11)
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_S	12
+#define GL_MDCK_EN_TX_PQM_SKIP_ONE_QT_ONLY_M	BIT(12)
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_S	13
+#define GL_MDCK_EN_TX_PQM_LSO_PKTCNT_ZERO_M	BIT(13)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_S	14
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_ZERO_M	BIT(14)
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_S	15
+#define GL_MDCK_EN_TX_PQM_SSO_LENGTH_EXCEED_M	BIT(15)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_S	16
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_ZERO_M	BIT(16)
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_S	17
+#define GL_MDCK_EN_TX_PQM_SSO_PKTCNT_EXCEED_M	BIT(17)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_S	18
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_ZERO_M	BIT(18)
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_S 19
+#define GL_MDCK_EN_TX_PQM_SSO_NUMDESCS_EXCEED_M BIT(19)
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_S 20
+#define GL_MDCK_EN_TX_PQM_TAIL_GT_RING_LENGTH_M BIT(20)
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_S	21
+#define GL_MDCK_EN_TX_PQM_RESERVED_DBL_TYPE_M	BIT(21)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_S 22
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_HEAD_DROP_DBL_M BIT(22)
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_S	23
+#define GL_MDCK_EN_TX_PQM_LSO_OVER_COMMS_Q_M	BIT(23)
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_S	24
+#define GL_MDCK_EN_TX_PQM_ILLEGAL_VF_QNUM_M	BIT(24)
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_S 25
+#define GL_MDCK_EN_TX_PQM_QTAIL_GT_RING_LENGTH_M BIT(25)
+#define GL_MDCK_EN_TX_PQM_RSVD_S		26
+#define GL_MDCK_EN_TX_PQM_RSVD_M		MAKEMASK(0x3F, 26)
+#define GL_MDCK_RX				0x0029422C /* Reset Source: CORER */
+#define GL_MDCK_RX_DESC_ADDR_S			0
+#define GL_MDCK_RX_DESC_ADDR_M			BIT(0)
+#define GL_MDET_RX				0x00294C00 /* Reset Source: CORER */
+#define GL_MDET_RX_QNUM_S			0
+#define GL_MDET_RX_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_RX_VF_NUM_S			15
+#define GL_MDET_RX_VF_NUM_M			MAKEMASK(0xFF, 15)
+#define GL_MDET_RX_PF_NUM_S			23
+#define GL_MDET_RX_PF_NUM_M			MAKEMASK(0x7, 23)
+#define GL_MDET_RX_MAL_TYPE_S			26
+#define GL_MDET_RX_MAL_TYPE_M			MAKEMASK(0x1F, 26)
+#define GL_MDET_RX_VALID_S			31
+#define GL_MDET_RX_VALID_M			BIT(31)
+#define GL_MDET_TX_PQM				0x002D2E00 /* Reset Source: CORER */
+#define GL_MDET_TX_PQM_PF_NUM_S			0
+#define GL_MDET_TX_PQM_PF_NUM_M			MAKEMASK(0x7, 0)
+#define GL_MDET_TX_PQM_VF_NUM_S			4
+#define GL_MDET_TX_PQM_VF_NUM_M			MAKEMASK(0xFF, 4)
+#define GL_MDET_TX_PQM_QNUM_S			12
+#define GL_MDET_TX_PQM_QNUM_M			MAKEMASK(0x3FFF, 12)
+#define GL_MDET_TX_PQM_MAL_TYPE_S		26
+#define GL_MDET_TX_PQM_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_PQM_VALID_S			31
+#define GL_MDET_TX_PQM_VALID_M			BIT(31)
+#define GL_MDET_TX_TCLAN			0x000FC068 /* Reset Source: CORER */
+#define GL_MDET_TX_TCLAN_QNUM_S			0
+#define GL_MDET_TX_TCLAN_QNUM_M			MAKEMASK(0x7FFF, 0)
+#define GL_MDET_TX_TCLAN_VF_NUM_S		15
+#define GL_MDET_TX_TCLAN_VF_NUM_M		MAKEMASK(0xFF, 15)
+#define GL_MDET_TX_TCLAN_PF_NUM_S		23
+#define GL_MDET_TX_TCLAN_PF_NUM_M		MAKEMASK(0x7, 23)
+#define GL_MDET_TX_TCLAN_MAL_TYPE_S		26
+#define GL_MDET_TX_TCLAN_MAL_TYPE_M		MAKEMASK(0x1F, 26)
+#define GL_MDET_TX_TCLAN_VALID_S		31
+#define GL_MDET_TX_TCLAN_VALID_M		BIT(31)
+#define PF_MDET_RX				0x00294280 /* Reset Source: CORER */
+#define PF_MDET_RX_VALID_S			0
+#define PF_MDET_RX_VALID_M			BIT(0)
+#define PF_MDET_TX_PQM				0x002D2C80 /* Reset Source: CORER */
+#define PF_MDET_TX_PQM_VALID_S			0
+#define PF_MDET_TX_PQM_VALID_M			BIT(0)
+#define PF_MDET_TX_TCLAN			0x000FC000 /* Reset Source: CORER */
+#define PF_MDET_TX_TCLAN_VALID_S		0
+#define PF_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define PF_MDET_TX_TDPU				0x00040800 /* Reset Source: CORER */
+#define PF_MDET_TX_TDPU_VALID_S			0
+#define PF_MDET_TX_TDPU_VALID_M			BIT(0)
+#define VP_MDET_RX(_VF)				(0x00294400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_RX_MAX_INDEX			255
+#define VP_MDET_RX_VALID_S			0
+#define VP_MDET_RX_VALID_M			BIT(0)
+#define VP_MDET_TX_PQM(_VF)			(0x002D2000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_PQM_MAX_INDEX		255
+#define VP_MDET_TX_PQM_VALID_S			0
+#define VP_MDET_TX_PQM_VALID_M			BIT(0)
+#define VP_MDET_TX_TCLAN(_VF)			(0x000FB800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TCLAN_MAX_INDEX		255
+#define VP_MDET_TX_TCLAN_VALID_S		0
+#define VP_MDET_TX_TCLAN_VALID_M		BIT(0)
+#define VP_MDET_TX_TDPU(_VF)			(0x00040000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VP_MDET_TX_TDPU_MAX_INDEX		255
+#define VP_MDET_TX_TDPU_VALID_S			0
+#define VP_MDET_TX_TDPU_VALID_M			BIT(0)
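+
+/*
+ * Usage sketch only, assuming rd32()/wr32() and the fixed-width types
+ * from ice_osdep.h: decode a latched malicious-driver-detect event
+ * from GL_MDET_TX_PQM with the _S/_M pairs above, then write all ones
+ * to re-arm detection (the clear convention these MDET registers use).
+ */
+static inline bool ice_example_mdet_tx_pqm(struct ice_hw *hw, u8 *pf_num,
+					   u8 *vf_num, u16 *queue)
+{
+	u32 reg = rd32(hw, GL_MDET_TX_PQM);
+
+	if (!(reg & GL_MDET_TX_PQM_VALID_M))
+		return false;
+
+	*pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >> GL_MDET_TX_PQM_PF_NUM_S;
+	*vf_num = (reg & GL_MDET_TX_PQM_VF_NUM_M) >> GL_MDET_TX_PQM_VF_NUM_S;
+	*queue = (reg & GL_MDET_TX_PQM_QNUM_M) >> GL_MDET_TX_PQM_QNUM_S;
+
+	wr32(hw, GL_MDET_TX_PQM, 0xFFFFFFFF);	/* write-1-to-clear */
+	return true;
+}
+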
+#define GENERAL_MNG_FW_DBG_CSR(_i)		(0x000B6180 + ((_i) * 4)) /* _i=0...9 */ /* Reset Source: POR */
+#define GENERAL_MNG_FW_DBG_CSR_MAX_INDEX	9
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_S 0
+#define GENERAL_MNG_FW_DBG_CSR_GENERAL_FW_DBG_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_FWRESETCNT				0x00083100 /* Reset Source: POR */
+#define GL_FWRESETCNT_FWRESETCNT_S		0
+#define GL_FWRESETCNT_FWRESETCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_FW_RAM_STAT			0x0008309C /* Reset Source: POR */
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_S	0
+#define GL_MNG_FW_RAM_STAT_FW_RAM_RST_STAT_M	BIT(0)
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_S	1
+#define GL_MNG_FW_RAM_STAT_MNG_MEM_ECC_ERR_M	BIT(1)
+#define GL_MNG_FWSM				0x000B6134 /* Reset Source: POR */
+#define GL_MNG_FWSM_FW_MODES_S			0
+#define GL_MNG_FWSM_FW_MODES_M			MAKEMASK(0x3, 0)
+#define GL_MNG_FWSM_RSV0_S			2
+#define GL_MNG_FWSM_RSV0_M			MAKEMASK(0xFF, 2)
+#define GL_MNG_FWSM_EEP_RELOAD_IND_S		10
+#define GL_MNG_FWSM_EEP_RELOAD_IND_M		BIT(10)
+#define GL_MNG_FWSM_RSV1_S			11
+#define GL_MNG_FWSM_RSV1_M			MAKEMASK(0xF, 11)
+#define GL_MNG_FWSM_RSV2_S			15
+#define GL_MNG_FWSM_RSV2_M			BIT(15)
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_S		16
+#define GL_MNG_FWSM_PCIR_AL_FAILURE_M		BIT(16)
+#define GL_MNG_FWSM_POR_AL_FAILURE_S		17
+#define GL_MNG_FWSM_POR_AL_FAILURE_M		BIT(17)
+#define GL_MNG_FWSM_RSV3_S			18
+#define GL_MNG_FWSM_RSV3_M			BIT(18)
+#define GL_MNG_FWSM_EXT_ERR_IND_S		19
+#define GL_MNG_FWSM_EXT_ERR_IND_M		MAKEMASK(0x3F, 19)
+#define GL_MNG_FWSM_RSV4_S			25
+#define GL_MNG_FWSM_RSV4_M			BIT(25)
+#define GL_MNG_FWSM_RESERVED_11_S		26
+#define GL_MNG_FWSM_RESERVED_11_M		MAKEMASK(0xF, 26)
+#define GL_MNG_FWSM_RSV5_S			30
+#define GL_MNG_FWSM_RSV5_M			MAKEMASK(0x3, 30)
+#define GL_MNG_HWARB_CTRL			0x000B6130 /* Reset Source: POR */
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_S		0
+#define GL_MNG_HWARB_CTRL_NCSI_ARB_EN_M		BIT(0)
+#define GL_MNG_SHA_EXTEND(_i)			(0x00083120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_S	0
+#define GL_MNG_SHA_EXTEND_GL_MNG_SHA_EXTEND_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_ROM(_i)		(0x00083160 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_ROM_MAX_INDEX		7
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_S 0
+#define GL_MNG_SHA_EXTEND_ROM_GL_MNG_SHA_EXTEND_ROM_M MAKEMASK(0xFFFFFFFF, 0)
+#define GL_MNG_SHA_EXTEND_STATUS		0x00083148 /* Reset Source: EMPR */
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_S	0
+#define GL_MNG_SHA_EXTEND_STATUS_STAGE_M	MAKEMASK(0x7, 0)
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_S	30
+#define GL_MNG_SHA_EXTEND_STATUS_FW_HALTED_M	BIT(30)
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_S		31
+#define GL_MNG_SHA_EXTEND_STATUS_DONE_M		BIT(31)
+#define GL_SWT_PRT2MDEF(_i)			(0x00216018 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: POR */
+#define GL_SWT_PRT2MDEF_MAX_INDEX		31
+#define GL_SWT_PRT2MDEF_MDEFIDX_S		0
+#define GL_SWT_PRT2MDEF_MDEFIDX_M		MAKEMASK(0x7, 0)
+#define GL_SWT_PRT2MDEF_MDEFENA_S		31
+#define GL_SWT_PRT2MDEF_MDEFENA_M		BIT(31)
+#define PRT_MNG_MANC				0x00214720 /* Reset Source: POR */
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_S	0
+#define PRT_MNG_MANC_FLOW_CONTROL_DISCARD_M	BIT(0)
+#define PRT_MNG_MANC_NCSI_DISCARD_S		1
+#define PRT_MNG_MANC_NCSI_DISCARD_M		BIT(1)
+#define PRT_MNG_MANC_RCV_TCO_EN_S		17
+#define PRT_MNG_MANC_RCV_TCO_EN_M		BIT(17)
+#define PRT_MNG_MANC_RCV_ALL_S			19
+#define PRT_MNG_MANC_RCV_ALL_M			BIT(19)
+#define PRT_MNG_MANC_FIXED_NET_TYPE_S		25
+#define PRT_MNG_MANC_FIXED_NET_TYPE_M		BIT(25)
+#define PRT_MNG_MANC_NET_TYPE_S			26
+#define PRT_MNG_MANC_NET_TYPE_M			BIT(26)
+#define PRT_MNG_MANC_EN_BMC2OS_S		28
+#define PRT_MNG_MANC_EN_BMC2OS_M		BIT(28)
+#define PRT_MNG_MANC_EN_BMC2NET_S		29
+#define PRT_MNG_MANC_EN_BMC2NET_M		BIT(29)
+#define PRT_MNG_MAVTV(_i)			(0x00214780 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MAVTV_MAX_INDEX			7
+#define PRT_MNG_MAVTV_VID_S			0
+#define PRT_MNG_MAVTV_VID_M			MAKEMASK(0xFFF, 0)
+#define PRT_MNG_MDEF(_i)			(0x00214880 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_MAX_INDEX			7
+#define PRT_MNG_MDEF_MAC_EXACT_AND_S		0
+#define PRT_MNG_MDEF_MAC_EXACT_AND_M		MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_BROADCAST_AND_S		4
+#define PRT_MNG_MDEF_BROADCAST_AND_M		BIT(4)
+#define PRT_MNG_MDEF_VLAN_AND_S			5
+#define PRT_MNG_MDEF_VLAN_AND_M			MAKEMASK(0xFF, 5)
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_S		13
+#define PRT_MNG_MDEF_IPV4_ADDRESS_AND_M		MAKEMASK(0xF, 13)
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_S		17
+#define PRT_MNG_MDEF_IPV6_ADDRESS_AND_M		MAKEMASK(0xF, 17)
+#define PRT_MNG_MDEF_MAC_EXACT_OR_S		21
+#define PRT_MNG_MDEF_MAC_EXACT_OR_M		MAKEMASK(0xF, 21)
+#define PRT_MNG_MDEF_BROADCAST_OR_S		25
+#define PRT_MNG_MDEF_BROADCAST_OR_M		BIT(25)
+#define PRT_MNG_MDEF_MULTICAST_AND_S		26
+#define PRT_MNG_MDEF_MULTICAST_AND_M		BIT(26)
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_S		27
+#define PRT_MNG_MDEF_ARP_REQUEST_OR_M		BIT(27)
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_S		28
+#define PRT_MNG_MDEF_ARP_RESPONSE_OR_M		BIT(28)
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_S 29
+#define PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_M BIT(29)
+#define PRT_MNG_MDEF_PORT_0X298_OR_S		30
+#define PRT_MNG_MDEF_PORT_0X298_OR_M		BIT(30)
+#define PRT_MNG_MDEF_PORT_0X26F_OR_S		31
+#define PRT_MNG_MDEF_PORT_0X26F_OR_M		BIT(31)
+#define PRT_MNG_MDEF_EXT(_i)			(0x00214A00 + ((_i) * 32)) /* _i=0...7 */ /* Reset Source: POR */
+#define PRT_MNG_MDEF_EXT_MAX_INDEX		7
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_S	0
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_M	MAKEMASK(0xF, 0)
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_S	4
+#define PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_M	MAKEMASK(0xF, 4)
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_S		8
+#define PRT_MNG_MDEF_EXT_FLEX_PORT_OR_M		MAKEMASK(0xFFFF, 8)
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_S		24
+#define PRT_MNG_MDEF_EXT_FLEX_TCO_M		BIT(24)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_S 25
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_M BIT(25)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_S 26
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_M BIT(26)
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_S 27
+#define PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_M BIT(27)
+#define PRT_MNG_MDEF_EXT_ICMP_OR_S		28
+#define PRT_MNG_MDEF_EXT_ICMP_OR_M		BIT(28)
+#define PRT_MNG_MDEF_EXT_MLD_S			29
+#define PRT_MNG_MDEF_EXT_MLD_M			BIT(29)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_S 30
+#define PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_M BIT(30)
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_S 31
+#define PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_M BIT(31)
+#define PRT_MNG_MDEFVSI(_i)			(0x00214980 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MDEFVSI_MAX_INDEX		3
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_S		0
+#define PRT_MNG_MDEFVSI_MDEFVSI_2N_M		MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_S		16
+#define PRT_MNG_MDEFVSI_MDEFVSI_2NP1_M		MAKEMASK(0xFFFF, 16)
+#define PRT_MNG_METF(_i)			(0x00214120 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_METF_MAX_INDEX			3
+#define PRT_MNG_METF_ETYPE_S			0
+#define PRT_MNG_METF_ETYPE_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_METF_POLARITY_S			30
+#define PRT_MNG_METF_POLARITY_M			BIT(30)
+#define PRT_MNG_MFUTP(_i)			(0x00214320 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MFUTP_MAX_INDEX			15
+#define PRT_MNG_MFUTP_MFUTP_N_S			0
+#define PRT_MNG_MFUTP_MFUTP_N_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MFUTP_UDP_S			16
+#define PRT_MNG_MFUTP_UDP_M			BIT(16)
+#define PRT_MNG_MFUTP_TCP_S			17
+#define PRT_MNG_MFUTP_TCP_M			BIT(17)
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_S	18
+#define PRT_MNG_MFUTP_SOURCE_DESTINATION_M	BIT(18)
+#define PRT_MNG_MIPAF4(_i)			(0x002141A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF4_MAX_INDEX		3
+#define PRT_MNG_MIPAF4_MIPAF_S			0
+#define PRT_MNG_MIPAF4_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MIPAF6(_i)			(0x00214520 + ((_i) * 32)) /* _i=0...15 */ /* Reset Source: POR */
+#define PRT_MNG_MIPAF6_MAX_INDEX		15
+#define PRT_MNG_MIPAF6_MIPAF_S			0
+#define PRT_MNG_MIPAF6_MIPAF_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MMAH(_i)			(0x00214220 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAH_MAX_INDEX			3
+#define PRT_MNG_MMAH_MMAH_S			0
+#define PRT_MNG_MMAH_MMAH_M			MAKEMASK(0xFFFF, 0)
+#define PRT_MNG_MMAL(_i)			(0x002142A0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: POR */
+#define PRT_MNG_MMAL_MAX_INDEX			3
+#define PRT_MNG_MMAL_MMAL_S			0
+#define PRT_MNG_MMAL_MMAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRT_MNG_MNGONLY				0x00214740 /* Reset Source: POR */
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_S 0
+#define PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_M MAKEMASK(0xFF, 0)
+#define PRT_MNG_MSFM				0x00214760 /* Reset Source: POR */
+#define PRT_MNG_MSFM_PORT_26F_UDP_S		0
+#define PRT_MNG_MSFM_PORT_26F_UDP_M		BIT(0)
+#define PRT_MNG_MSFM_PORT_26F_TCP_S		1
+#define PRT_MNG_MSFM_PORT_26F_TCP_M		BIT(1)
+#define PRT_MNG_MSFM_PORT_298_UDP_S		2
+#define PRT_MNG_MSFM_PORT_298_UDP_M		BIT(2)
+#define PRT_MNG_MSFM_PORT_298_TCP_S		3
+#define PRT_MNG_MSFM_PORT_298_TCP_M		BIT(3)
+#define PRT_MNG_MSFM_IPV6_0_MASK_S		4
+#define PRT_MNG_MSFM_IPV6_0_MASK_M		BIT(4)
+#define PRT_MNG_MSFM_IPV6_1_MASK_S		5
+#define PRT_MNG_MSFM_IPV6_1_MASK_M		BIT(5)
+#define PRT_MNG_MSFM_IPV6_2_MASK_S		6
+#define PRT_MNG_MSFM_IPV6_2_MASK_M		BIT(6)
+#define PRT_MNG_MSFM_IPV6_3_MASK_S		7
+#define PRT_MNG_MSFM_IPV6_3_MASK_M		BIT(7)
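+/* Editorial note, not part of the autogenerated register map: every field
+ * in this file is described by a shift/mask pair, where <FIELD>_S is the
+ * bit offset and <FIELD>_M is the mask already shifted into position
+ * (MAKEMASK(m, s) expands to ((m) << (s)), per ice_osdep.h). A field is
+ * therefore extracted as, for example:
+ *
+ *	u32 reg = rd32(hw, PRT_MNG_MAVTV(0));
+ *	u16 vid = (u16)((reg & PRT_MNG_MAVTV_VID_M) >> PRT_MNG_MAVTV_VID_S);
+ *
+ * rd32() is assumed here for illustration only; the actual register-access
+ * helpers are supplied by the driver's osdep layer.
+ */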
+#define MSIX_PBA_PAGE(_i)			(0x02E08000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA_PAGE_MAX_INDEX			63
+#define MSIX_PBA_PAGE_PENBIT_S			0
+#define MSIX_PBA_PAGE_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_PBA1(_i)				(0x00008000 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: FLR */
+#define MSIX_PBA1_MAX_INDEX			63
+#define MSIX_PBA1_PENBIT_S			0
+#define MSIX_PBA1_PENBIT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TADD_PAGE(_i)			(0x02E00000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD_PAGE_MAX_INDEX		2047
+#define MSIX_TADD_PAGE_MSIXTADD10_S		0
+#define MSIX_TADD_PAGE_MSIXTADD10_M		MAKEMASK(0x3, 0)
+#define MSIX_TADD_PAGE_MSIXTADD_S		2
+#define MSIX_TADD_PAGE_MSIXTADD_M		MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TADD1(_i)				(0x00000000 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TADD1_MAX_INDEX			2047
+#define MSIX_TADD1_MSIXTADD10_S			0
+#define MSIX_TADD1_MSIXTADD10_M			MAKEMASK(0x3, 0)
+#define MSIX_TADD1_MSIXTADD_S			2
+#define MSIX_TADD1_MSIXTADD_M			MAKEMASK(0x3FFFFFFF, 2)
+#define MSIX_TMSG(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_MAX_INDEX			2047
+#define MSIX_TMSG_MSIXTMSG_S			0
+#define MSIX_TMSG_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG_PAGE(_i)			(0x02E00008 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TMSG_PAGE_MAX_INDEX		2047
+#define MSIX_TMSG_PAGE_MSIXTMSG_S		0
+#define MSIX_TMSG_PAGE_MSIXTMSG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD_PAGE(_i)			(0x02E00004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD_PAGE_MAX_INDEX		2047
+#define MSIX_TUADD_PAGE_MSIXTUADD_S		0
+#define MSIX_TUADD_PAGE_MSIXTUADD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TUADD1(_i)				(0x00000004 + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TUADD1_MAX_INDEX			2047
+#define MSIX_TUADD1_MSIXTUADD_S			0
+#define MSIX_TUADD1_MSIXTUADD_M			MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TVCTRL_PAGE(_i)			(0x02E0000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL_PAGE_MAX_INDEX		2047
+#define MSIX_TVCTRL_PAGE_MASK_S			0
+#define MSIX_TVCTRL_PAGE_MASK_M			BIT(0)
+#define MSIX_TVCTRL1(_i)			(0x0000000C + ((_i) * 16)) /* _i=0...2047 */ /* Reset Source: FLR */
+#define MSIX_TVCTRL1_MAX_INDEX			2047
+#define MSIX_TVCTRL1_MASK_S			0
+#define MSIX_TVCTRL1_MASK_M			BIT(0)
+#define GLNVM_AL_DONE_HLP			0x000824C4 /* Reset Source: POR */
+#define GLNVM_AL_DONE_HLP_HLP_CORER_S		0
+#define GLNVM_AL_DONE_HLP_HLP_CORER_M		BIT(0)
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_S		1
+#define GLNVM_AL_DONE_HLP_HLP_FULLR_M		BIT(1)
+#define GLNVM_ALTIMERS				0x000B6140 /* Reset Source: POR */
+#define GLNVM_ALTIMERS_PCI_ALTIMER_S		0
+#define GLNVM_ALTIMERS_PCI_ALTIMER_M		MAKEMASK(0xFFF, 0)
+#define GLNVM_ALTIMERS_GEN_ALTIMER_S		12
+#define GLNVM_ALTIMERS_GEN_ALTIMER_M		MAKEMASK(0xFFFFF, 12)
+#define GLNVM_FLA				0x000B6108 /* Reset Source: POR */
+#define GLNVM_FLA_LOCKED_S			6
+#define GLNVM_FLA_LOCKED_M			BIT(6)
+#define GLNVM_GENS				0x000B6100 /* Reset Source: POR */
+#define GLNVM_GENS_NVM_PRES_S			0
+#define GLNVM_GENS_NVM_PRES_M			BIT(0)
+#define GLNVM_GENS_SR_SIZE_S			5
+#define GLNVM_GENS_SR_SIZE_M			MAKEMASK(0x7, 5)
+#define GLNVM_GENS_BANK1VAL_S			8
+#define GLNVM_GENS_BANK1VAL_M			BIT(8)
+#define GLNVM_GENS_ALT_PRST_S			23
+#define GLNVM_GENS_ALT_PRST_M			BIT(23)
+#define GLNVM_GENS_FL_AUTO_RD_S			25
+#define GLNVM_GENS_FL_AUTO_RD_M			BIT(25)
+#define GLNVM_PROTCSR(_i)			(0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset Source: POR */
+#define GLNVM_PROTCSR_MAX_INDEX			59
+#define GLNVM_PROTCSR_ADDR_BLOCK_S		0
+#define GLNVM_PROTCSR_ADDR_BLOCK_M		MAKEMASK(0xFFFFFF, 0)
+#define GLNVM_ULD				0x000B6008 /* Reset Source: POR */
+#define GLNVM_ULD_PCIER_DONE_S			0
+#define GLNVM_ULD_PCIER_DONE_M			BIT(0)
+#define GLNVM_ULD_PCIER_DONE_1_S		1
+#define GLNVM_ULD_PCIER_DONE_1_M		BIT(1)
+#define GLNVM_ULD_CORER_DONE_S			3
+#define GLNVM_ULD_CORER_DONE_M			BIT(3)
+#define GLNVM_ULD_GLOBR_DONE_S			4
+#define GLNVM_ULD_GLOBR_DONE_M			BIT(4)
+#define GLNVM_ULD_POR_DONE_S			5
+#define GLNVM_ULD_POR_DONE_M			BIT(5)
+#define GLNVM_ULD_POR_DONE_1_S			8
+#define GLNVM_ULD_POR_DONE_1_M			BIT(8)
+#define GLNVM_ULD_PCIER_DONE_2_S		9
+#define GLNVM_ULD_PCIER_DONE_2_M		BIT(9)
+#define GLNVM_ULD_PE_DONE_S			10
+#define GLNVM_ULD_PE_DONE_M			BIT(10)
+#define GLNVM_ULD_HLP_CORE_DONE_S		11
+#define GLNVM_ULD_HLP_CORE_DONE_M		BIT(11)
+#define GLNVM_ULD_HLP_FULL_DONE_S		12
+#define GLNVM_ULD_HLP_FULL_DONE_M		BIT(12)
+#define GLNVM_ULT				0x000B6154 /* Reset Source: POR */
+#define GLNVM_ULT_CONF_PCIR_AE_S		0
+#define GLNVM_ULT_CONF_PCIR_AE_M		BIT(0)
+#define GLNVM_ULT_CONF_PCIRTL_AE_S		1
+#define GLNVM_ULT_CONF_PCIRTL_AE_M		BIT(1)
+#define GLNVM_ULT_RESERVED_1_S			2
+#define GLNVM_ULT_RESERVED_1_M			BIT(2)
+#define GLNVM_ULT_CONF_CORE_AE_S		3
+#define GLNVM_ULT_CONF_CORE_AE_M		BIT(3)
+#define GLNVM_ULT_CONF_GLOBAL_AE_S		4
+#define GLNVM_ULT_CONF_GLOBAL_AE_M		BIT(4)
+#define GLNVM_ULT_CONF_POR_AE_S			5
+#define GLNVM_ULT_CONF_POR_AE_M			BIT(5)
+#define GLNVM_ULT_RESERVED_2_S			6
+#define GLNVM_ULT_RESERVED_2_M			BIT(6)
+#define GLNVM_ULT_RESERVED_3_S			7
+#define GLNVM_ULT_RESERVED_3_M			BIT(7)
+#define GLNVM_ULT_RESERVED_5_S			8
+#define GLNVM_ULT_RESERVED_5_M			BIT(8)
+#define GLNVM_ULT_CONF_PCIALT_AE_S		9
+#define GLNVM_ULT_CONF_PCIALT_AE_M		BIT(9)
+#define GLNVM_ULT_CONF_PE_AE_S			10
+#define GLNVM_ULT_CONF_PE_AE_M			BIT(10)
+#define GLNVM_ULT_RESERVED_4_S			11
+#define GLNVM_ULT_RESERVED_4_M			MAKEMASK(0x1FFFFF, 11)
+#define GL_COTF_MARKER_STATUS			0x00200200 /* Reset Source: CORER */
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_COTF_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFF, 0)
+#define GL_COTF_MARKER_TRIG_RCU_PRS(_i)		(0x002001D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GL_COTF_MARKER_TRIG_RCU_PRS_MAX_INDEX	7
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_S	0
+#define GL_COTF_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(0)
+#define GL_PRS_MARKER_ERROR			0x00200204 /* Reset Source: CORER */
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_S	0
+#define GL_PRS_MARKER_ERROR_XLR_CFG_ERR_M	BIT(0)
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_S	1
+#define GL_PRS_MARKER_ERROR_QH_CFG_ERR_M	BIT(1)
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_S	2
+#define GL_PRS_MARKER_ERROR_COTF_CFG_ERR_M	BIT(2)
+#define GL_PRS_RX_PIPE_INIT0(_i)		(0x0020000C + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT1			0x00200028 /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_PIPE_INIT2			0x0020002C /* Reset Source: CORER */
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_RX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_RX_SIZE_CTRL			0x00200004 /* Reset Source: CORER */
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_RX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_RX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_PRS_TX_PIPE_INIT0(_i)		(0x00202018 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT0_MAX_INDEX		6
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT0_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT1			0x00202034 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT1_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_PIPE_INIT2			0x00202038 /* Reset Source: CORER */
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_S	0
+#define GL_PRS_TX_PIPE_INIT2_GPCSR_INIT_M	MAKEMASK(0xFFFF, 0)
+#define GL_PRS_TX_SIZE_CTRL			0x00202014 /* Reset Source: CORER */
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_S		0
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_M		MAKEMASK(0x3FF, 0)
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_S	15
+#define GL_PRS_TX_SIZE_CTRL_MIN_SIZE_EN_M	BIT(15)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_S		16
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_M		MAKEMASK(0x3FF, 16)
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_S	31
+#define GL_PRS_TX_SIZE_CTRL_MAX_SIZE_EN_M	BIT(31)
+#define GL_QH_MARKER_STATUS			0x002001FC /* Reset Source: CORER */
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_S		0
+#define GL_QH_MARKER_STATUS_MRKR_BUSY_M		MAKEMASK(0xF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS(_i)		(0x002001C4 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GL_QH_MARKER_TRIG_RCU_PRS_MAX_INDEX	3
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_S	0
+#define GL_QH_MARKER_TRIG_RCU_PRS_QPID_M	MAKEMASK(0x3FFFF, 0)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_S	18
+#define GL_QH_MARKER_TRIG_RCU_PRS_PE_TAG_M	MAKEMASK(0xFF, 18)
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_S	26
+#define GL_QH_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 26)
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_S	31
+#define GL_QH_MARKER_TRIG_RCU_PRS_SET_RST_M	BIT(31)
+#define GL_RPRS_ANA_CSR_CTRL			0x00200708 /* Reset Source: CORER */
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_RPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_RPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_ANA_CSR_CTRL			0x00202100 /* Reset Source: CORER */
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_S	0
+#define GL_TPRS_ANA_CSR_CTRL_SELECT_EN_M	BIT(0)
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_S	1
+#define GL_TPRS_ANA_CSR_CTRL_SELECTED_ANA_M	BIT(1)
+#define GL_TPRS_MNG_PM_THR			0x00202004 /* Reset Source: CORER */
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_S		0
+#define GL_TPRS_MNG_PM_THR_MNG_PM_THR_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_CNT(_i)			(0x00202008 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_TPRS_PM_CNT_MAX_INDEX		1
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_S		0
+#define GL_TPRS_PM_CNT_GL_PRS_PM_CNT_M		MAKEMASK(0x3FFF, 0)
+#define GL_TPRS_PM_THR				0x00202000 /* Reset Source: CORER */
+#define GL_TPRS_PM_THR_PM_THR_S			0
+#define GL_TPRS_PM_THR_PM_THR_M			MAKEMASK(0x3FFF, 0)
+#define GL_XLR_MARKER_LOG_RCU_PRS(_i)		(0x00200208 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_LOG_RCU_PRS_MAX_INDEX	63
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_S	0
+#define GL_XLR_MARKER_LOG_RCU_PRS_XLR_TRIG_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_STATUS(_i)		(0x002001F4 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GL_XLR_MARKER_STATUS_MAX_INDEX		1
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_S	0
+#define GL_XLR_MARKER_STATUS_MRKR_BUSY_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_XLR_MARKER_TRIG_PE			0x005008C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_S	10
+#define GL_XLR_MARKER_TRIG_PE_VM_VF_TYPE_M	MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_S		12
+#define GL_XLR_MARKER_TRIG_PE_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_PE_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_XLR_MARKER_TRIG_RCU_PRS		0x002001C0 /* Reset Source: CORER */
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_S	0
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_NUM_M	MAKEMASK(0x3FF, 0)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_S 10
+#define GL_XLR_MARKER_TRIG_RCU_PRS_VM_VF_TYPE_M MAKEMASK(0x3, 10)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_S	12
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PF_NUM_M	MAKEMASK(0x7, 12)
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_S	16
+#define GL_XLR_MARKER_TRIG_RCU_PRS_PORT_NUM_M	MAKEMASK(0x7, 16)
+#define GL_CLKGATE_EVENTS			0x0009DE70 /* Reset Source: PERST */
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_S 0
+#define GL_CLKGATE_EVENTS_PRIMARY_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 0)
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_S 16
+#define GL_CLKGATE_EVENTS_SIDEBAND_CLKGATE_EVENTS_M MAKEMASK(0xFFFF, 16)
+#define GLPCI_BYTCTH_NP_C			0x000BFDA8 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTH_P				0x0009E970 /* Reset Source: PCIR */
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTH_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_NP_C			0x000BFDAC /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_NP_C_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_BYTCTL_P				0x0009E994 /* Reset Source: PCIR */
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_S	0
+#define GLPCI_BYTCTL_P_PCI_COUNT_BW_BCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_CAPCTRL				0x0009DE88 /* Reset Source: PCIR */
+#define GLPCI_CAPCTRL_VPD_EN_S			0
+#define GLPCI_CAPCTRL_VPD_EN_M			BIT(0)
+#define GLPCI_CAPSUP				0x0009DE8C /* Reset Source: PCIR */
+#define GLPCI_CAPSUP_PCIE_VER_S			0
+#define GLPCI_CAPSUP_PCIE_VER_M			BIT(0)
+#define GLPCI_CAPSUP_RESERVED_2_S		1
+#define GLPCI_CAPSUP_RESERVED_2_M		BIT(1)
+#define GLPCI_CAPSUP_LTR_EN_S			2
+#define GLPCI_CAPSUP_LTR_EN_M			BIT(2)
+#define GLPCI_CAPSUP_TPH_EN_S			3
+#define GLPCI_CAPSUP_TPH_EN_M			BIT(3)
+#define GLPCI_CAPSUP_ARI_EN_S			4
+#define GLPCI_CAPSUP_ARI_EN_M			BIT(4)
+#define GLPCI_CAPSUP_IOV_EN_S			5
+#define GLPCI_CAPSUP_IOV_EN_M			BIT(5)
+#define GLPCI_CAPSUP_ACS_EN_S			6
+#define GLPCI_CAPSUP_ACS_EN_M			BIT(6)
+#define GLPCI_CAPSUP_SEC_EN_S			7
+#define GLPCI_CAPSUP_SEC_EN_M			BIT(7)
+#define GLPCI_CAPSUP_PASID_EN_S			8
+#define GLPCI_CAPSUP_PASID_EN_M			BIT(8)
+#define GLPCI_CAPSUP_DLFE_EN_S			9
+#define GLPCI_CAPSUP_DLFE_EN_M			BIT(9)
+#define GLPCI_CAPSUP_GEN4_EXT_EN_S		10
+#define GLPCI_CAPSUP_GEN4_EXT_EN_M		BIT(10)
+#define GLPCI_CAPSUP_GEN4_MARG_EN_S		11
+#define GLPCI_CAPSUP_GEN4_MARG_EN_M		BIT(11)
+#define GLPCI_CAPSUP_ECRC_GEN_EN_S		16
+#define GLPCI_CAPSUP_ECRC_GEN_EN_M		BIT(16)
+#define GLPCI_CAPSUP_ECRC_CHK_EN_S		17
+#define GLPCI_CAPSUP_ECRC_CHK_EN_M		BIT(17)
+#define GLPCI_CAPSUP_IDO_EN_S			18
+#define GLPCI_CAPSUP_IDO_EN_M			BIT(18)
+#define GLPCI_CAPSUP_MSI_MASK_S			19
+#define GLPCI_CAPSUP_MSI_MASK_M			BIT(19)
+#define GLPCI_CAPSUP_CSR_CONF_EN_S		20
+#define GLPCI_CAPSUP_CSR_CONF_EN_M		BIT(20)
+#define GLPCI_CAPSUP_WAKUP_EN_S			21
+#define GLPCI_CAPSUP_WAKUP_EN_M			BIT(21)
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_S		30
+#define GLPCI_CAPSUP_LOAD_SUBSYS_ID_M		BIT(30)
+#define GLPCI_CAPSUP_LOAD_DEV_ID_S		31
+#define GLPCI_CAPSUP_LOAD_DEV_ID_M		BIT(31)
+#define GLPCI_CNF				0x0009DEA0 /* Reset Source: POR */
+#define GLPCI_CNF_FLEX10_S			1
+#define GLPCI_CNF_FLEX10_M			BIT(1)
+#define GLPCI_CNF_WAKE_PIN_EN_S			2
+#define GLPCI_CNF_WAKE_PIN_EN_M			BIT(2)
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_S	3
+#define GLPCI_CNF_MSIX_ECC_BLOCK_DISABLE_M	BIT(3)
+#define GLPCI_CNF2				0x000BE004 /* Reset Source: PCIR */
+#define GLPCI_CNF2_RO_DIS_S			0
+#define GLPCI_CNF2_RO_DIS_M			BIT(0)
+#define GLPCI_CNF2_CACHELINE_SIZE_S		1
+#define GLPCI_CNF2_CACHELINE_SIZE_M		BIT(1)
+#define GLPCI_DREVID				0x0009E9AC /* Reset Source: PCIR */
+#define GLPCI_DREVID_DEFAULT_REVID_S		0
+#define GLPCI_DREVID_DEFAULT_REVID_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_1_NP_C			0x000BFDA4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_NP_C_RT_MODE_S		8
+#define GLPCI_GSCL_1_NP_C_RT_MODE_M		BIT(8)
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_S		9
+#define GLPCI_GSCL_1_NP_C_RT_EVENT_M		MAKEMASK(0x1F, 9)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_S	15
+#define GLPCI_GSCL_1_NP_C_PCI_COUNT_BW_EV_M	MAKEMASK(0x1F, 15)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_S	30
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_STOP_M	BIT(30)
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_NP_C_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_1_P				0x0009E9B4 /* Reset Source: PCIR */
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_S		0
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_0_M		BIT(0)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_S		1
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_1_M		BIT(1)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_S		2
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_2_M		BIT(2)
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_S		3
+#define GLPCI_GSCL_1_P_GIO_COUNT_EN_3_M		BIT(3)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_S		4
+#define GLPCI_GSCL_1_P_LBC_ENABLE_0_M		BIT(4)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_S		5
+#define GLPCI_GSCL_1_P_LBC_ENABLE_1_M		BIT(5)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_S		6
+#define GLPCI_GSCL_1_P_LBC_ENABLE_2_M		BIT(6)
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_S		7
+#define GLPCI_GSCL_1_P_LBC_ENABLE_3_M		BIT(7)
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_S	14
+#define GLPCI_GSCL_1_P_PCI_COUNT_BW_EN_M	BIT(14)
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_S		28
+#define GLPCI_GSCL_1_P_GIO_64_BIT_EN_M		BIT(28)
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_S	29
+#define GLPCI_GSCL_1_P_GIO_COUNT_RESET_M	BIT(29)
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_S		30
+#define GLPCI_GSCL_1_P_GIO_COUNT_STOP_M		BIT(30)
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_S	31
+#define GLPCI_GSCL_1_P_GIO_COUNT_START_M	BIT(31)
+#define GLPCI_GSCL_2				0x0009E998 /* Reset Source: PCIR */
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_S		0
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_0_M		MAKEMASK(0xFF, 0)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_S		8
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_1_M		MAKEMASK(0xFF, 8)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_S		16
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_2_M		MAKEMASK(0xFF, 16)
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_S		24
+#define GLPCI_GSCL_2_GIO_EVENT_NUM_3_M		MAKEMASK(0xFF, 24)
+#define GLPCI_GSCL_5_8(_i)			(0x0009E954 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCL_5_8_MAX_INDEX		3
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_S	0
+#define GLPCI_GSCL_5_8_LBC_THRESHOLD_N_M	MAKEMASK(0xFFFF, 0)
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_S		16
+#define GLPCI_GSCL_5_8_LBC_TIMER_N_M		MAKEMASK(0xFFFF, 16)
+#define GLPCI_GSCN_0_3(_i)			(0x0009E99C + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: PCIR */
+#define GLPCI_GSCN_0_3_MAX_INDEX		3
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_S		0
+#define GLPCI_GSCN_0_3_EVENT_COUNTER_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LATCT_NP_C			0x000BFDA0 /* Reset Source: PCIR */
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_S	0
+#define GLPCI_LATCT_NP_C_PCI_LATENCY_COUNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_LBARCTRL				0x0009DE74 /* Reset Source: POR */
+#define GLPCI_LBARCTRL_PREFBAR_S		0
+#define GLPCI_LBARCTRL_PREFBAR_M		BIT(0)
+#define GLPCI_LBARCTRL_BAR32_S			1
+#define GLPCI_LBARCTRL_BAR32_M			BIT(1)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_S	2
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_PF_M	BIT(2)
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_S		3
+#define GLPCI_LBARCTRL_FLASH_EXPOSE_M		BIT(3)
+#define GLPCI_LBARCTRL_PE_DB_SIZE_S		4
+#define GLPCI_LBARCTRL_PE_DB_SIZE_M		MAKEMASK(0x3, 4)
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_S	9
+#define GLPCI_LBARCTRL_PAGES_SPACE_EN_VF_M	BIT(9)
+#define GLPCI_LBARCTRL_EXROM_SIZE_S		11
+#define GLPCI_LBARCTRL_EXROM_SIZE_M		MAKEMASK(0x7, 11)
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_S		14
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_M		MAKEMASK(0x3, 14)
+#define GLPCI_LINKCAP				0x0009DE90 /* Reset Source: PCIR */
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_S	0
+#define GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_M	MAKEMASK(0x3F, 0)
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_S		9
+#define GLPCI_LINKCAP_MAX_LINK_WIDTH_M		MAKEMASK(0xF, 9)
+#define GLPCI_NPQ_CFG				0x000BFD80 /* Reset Source: PCIR */
+#define GLPCI_NPQ_CFG_EXTEND_TO_S		0
+#define GLPCI_NPQ_CFG_EXTEND_TO_M		BIT(0)
+#define GLPCI_NPQ_CFG_SMALL_TO_S		1
+#define GLPCI_NPQ_CFG_SMALL_TO_M		BIT(1)
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_S		2
+#define GLPCI_NPQ_CFG_WEIGHT_AVG_M		MAKEMASK(0xF, 2)
+#define GLPCI_NPQ_CFG_NPQ_SPARE_S		6
+#define GLPCI_NPQ_CFG_NPQ_SPARE_M		MAKEMASK(0x3FF, 6)
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_S		16
+#define GLPCI_NPQ_CFG_NPQ_ERR_STAT_M		MAKEMASK(0xF, 16)
+#define GLPCI_PKTCT_NP_C			0x000BFD9C /* Reset Source: PCIR */
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_NP_C_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PKTCT_P				0x0009E9B0 /* Reset Source: PCIR */
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_S	0
+#define GLPCI_PKTCT_P_PCI_COUNT_BW_PCT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_PMSUP				0x0009DE94 /* Reset Source: PCIR */
+#define GLPCI_PMSUP_RESERVED_0_S		0
+#define GLPCI_PMSUP_RESERVED_0_M		MAKEMASK(0x3, 0)
+#define GLPCI_PMSUP_RESERVED_1_S		2
+#define GLPCI_PMSUP_RESERVED_1_M		MAKEMASK(0x7, 2)
+#define GLPCI_PMSUP_RESERVED_2_S		5
+#define GLPCI_PMSUP_RESERVED_2_M		MAKEMASK(0x7, 5)
+#define GLPCI_PMSUP_L0S_ACC_LAT_S		8
+#define GLPCI_PMSUP_L0S_ACC_LAT_M		MAKEMASK(0x7, 8)
+#define GLPCI_PMSUP_L1_ACC_LAT_S		11
+#define GLPCI_PMSUP_L1_ACC_LAT_M		MAKEMASK(0x7, 11)
+#define GLPCI_PMSUP_RESERVED_3_S		14
+#define GLPCI_PMSUP_RESERVED_3_M		BIT(14)
+#define GLPCI_PMSUP_OBFF_SUP_S			15
+#define GLPCI_PMSUP_OBFF_SUP_M			MAKEMASK(0x3, 15)
+#define GLPCI_PUSH_PE_IF_TO_STATUS		0x0009DF44 /* Reset Source: PCIR */
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_S 0
+#define GLPCI_PUSH_PE_IF_TO_STATUS_GLPCI_PUSH_PE_IF_TO_STATUS_M BIT(0)
+#define GLPCI_PWRDATA				0x0009DE7C /* Reset Source: PCIR */
+#define GLPCI_PWRDATA_D0_POWER_S		0
+#define GLPCI_PWRDATA_D0_POWER_M		MAKEMASK(0xFF, 0)
+#define GLPCI_PWRDATA_COMM_POWER_S		8
+#define GLPCI_PWRDATA_COMM_POWER_M		MAKEMASK(0xFF, 8)
+#define GLPCI_PWRDATA_D3_POWER_S		16
+#define GLPCI_PWRDATA_D3_POWER_M		MAKEMASK(0xFF, 16)
+#define GLPCI_PWRDATA_DATA_SCALE_S		24
+#define GLPCI_PWRDATA_DATA_SCALE_M		MAKEMASK(0x3, 24)
+#define GLPCI_REVID				0x0009DE98 /* Reset Source: PCIR */
+#define GLPCI_REVID_NVM_REVID_S			0
+#define GLPCI_REVID_NVM_REVID_M			MAKEMASK(0xFF, 0)
+#define GLPCI_SERH				0x0009DE84 /* Reset Source: PCIR */
+#define GLPCI_SERH_SER_NUM_H_S			0
+#define GLPCI_SERH_SER_NUM_H_M			MAKEMASK(0xFFFF, 0)
+#define GLPCI_SERL				0x0009DE80 /* Reset Source: PCIR */
+#define GLPCI_SERL_SER_NUM_L_S			0
+#define GLPCI_SERL_SER_NUM_L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPCI_SUBVENID				0x0009DEE8 /* Reset Source: PCIR */
+#define GLPCI_SUBVENID_SUB_VEN_ID_S		0
+#define GLPCI_SUBVENID_SUB_VEN_ID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_UPADD				0x000BE0D4 /* Reset Source: PCIR */
+#define GLPCI_UPADD_ADDRESS_S			1
+#define GLPCI_UPADD_ADDRESS_M			MAKEMASK(0x7FFFFFFF, 1)
+#define GLPCI_VENDORID				0x0009DEC8 /* Reset Source: PCIR */
+#define GLPCI_VENDORID_VENDORID_S		0
+#define GLPCI_VENDORID_VENDORID_M		MAKEMASK(0xFFFF, 0)
+#define GLPCI_VFSUP				0x0009DE9C /* Reset Source: PCIR */
+#define GLPCI_VFSUP_VF_PREFETCH_S		0
+#define GLPCI_VFSUP_VF_PREFETCH_M		BIT(0)
+#define GLPCI_VFSUP_VR_BAR_TYPE_S		1
+#define GLPCI_VFSUP_VR_BAR_TYPE_M		BIT(1)
+#define GLPCI_WATMK_CLNT_PIPEMON		0x000BFD90 /* Reset Source: PCIR */
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_S	0
+#define GLPCI_WATMK_CLNT_PIPEMON_DATA_LINES_M	MAKEMASK(0xFFFF, 0)
+#define PF_FUNC_RID				0x0009E880 /* Reset Source: PCIR */
+#define PF_FUNC_RID_FUNCTION_NUMBER_S		0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M		MAKEMASK(0x7, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S		3
+#define PF_FUNC_RID_DEVICE_NUMBER_M		MAKEMASK(0x1F, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S		8
+#define PF_FUNC_RID_BUS_NUMBER_M		MAKEMASK(0xFF, 8)
+#define PF_PCI_CIAA				0x0009E580 /* Reset Source: FLR */
+#define PF_PCI_CIAA_ADDRESS_S			0
+#define PF_PCI_CIAA_ADDRESS_M			MAKEMASK(0xFFF, 0)
+#define PF_PCI_CIAA_VF_NUM_S			12
+#define PF_PCI_CIAA_VF_NUM_M			MAKEMASK(0xFF, 12)
+#define PF_PCI_CIAD				0x0009E500 /* Reset Source: FLR */
+#define PF_PCI_CIAD_DATA_S			0
+#define PF_PCI_CIAD_DATA_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPCI_CLASS				0x0009DB00 /* Reset Source: PCIR */
+#define PFPCI_CLASS_STORAGE_CLASS_S		0
+#define PFPCI_CLASS_STORAGE_CLASS_M		BIT(0)
+#define PFPCI_CLASS_PF_IS_LAN_S			2
+#define PFPCI_CLASS_PF_IS_LAN_M			BIT(2)
+#define PFPCI_CNF				0x0009DF00 /* Reset Source: PCIR */
+#define PFPCI_CNF_MSI_EN_S			2
+#define PFPCI_CNF_MSI_EN_M			BIT(2)
+#define PFPCI_CNF_EXROM_DIS_S			3
+#define PFPCI_CNF_EXROM_DIS_M			BIT(3)
+#define PFPCI_CNF_IO_BAR_S			4
+#define PFPCI_CNF_IO_BAR_M			BIT(4)
+#define PFPCI_CNF_INT_PIN_S			5
+#define PFPCI_CNF_INT_PIN_M			MAKEMASK(0x3, 5)
+#define PFPCI_DEVID				0x0009DE00 /* Reset Source: PCIR */
+#define PFPCI_DEVID_PF_DEV_ID_S			0
+#define PFPCI_DEVID_PF_DEV_ID_M			MAKEMASK(0xFFFF, 0)
+#define PFPCI_DEVID_VF_DEV_ID_S			16
+#define PFPCI_DEVID_VF_DEV_ID_M			MAKEMASK(0xFFFF, 16)
+#define PFPCI_FACTPS				0x0009E900 /* Reset Source: FLR */
+#define PFPCI_FACTPS_FUNC_POWER_STATE_S		0
+#define PFPCI_FACTPS_FUNC_POWER_STATE_M		MAKEMASK(0x3, 0)
+#define PFPCI_FACTPS_FUNC_AUX_EN_S		3
+#define PFPCI_FACTPS_FUNC_AUX_EN_M		BIT(3)
+#define PFPCI_FUNC				0x0009D980 /* Reset Source: POR */
+#define PFPCI_FUNC_FUNC_DIS_S			0
+#define PFPCI_FUNC_FUNC_DIS_M			BIT(0)
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_S		1
+#define PFPCI_FUNC_ALLOW_FUNC_DIS_M		BIT(1)
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_S	2
+#define PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_M	BIT(2)
+#define PFPCI_PF_FLUSH_DONE			0x0009E400 /* Reset Source: PCIR */
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_PF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_PM				0x0009DA80 /* Reset Source: POR */
+#define PFPCI_PM_PME_EN_S			0
+#define PFPCI_PM_PME_EN_M			BIT(0)
+#define PFPCI_STATUS1				0x0009DA00 /* Reset Source: POR */
+#define PFPCI_STATUS1_FUNC_VALID_S		0
+#define PFPCI_STATUS1_FUNC_VALID_M		BIT(0)
+#define PFPCI_SUBSYSID				0x0009D880 /* Reset Source: PCIR */
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_S		0
+#define PFPCI_SUBSYSID_PF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 0)
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_S		16
+#define PFPCI_SUBSYSID_VF_SUBSYS_ID_M		MAKEMASK(0xFFFF, 16)
+#define PFPCI_VF_FLUSH_DONE(_VF)		(0x0009E000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE_MAX_INDEX		255
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VM_FLUSH_DONE			0x0009E480 /* Reset Source: PCIR */
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_S	0
+#define PFPCI_VM_FLUSH_DONE_FLUSH_DONE_M	BIT(0)
+#define PFPCI_VMINDEX				0x0009E600 /* Reset Source: PCIR */
+#define PFPCI_VMINDEX_VMINDEX_S			0
+#define PFPCI_VMINDEX_VMINDEX_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VMPEND				0x0009E800 /* Reset Source: PCIR */
+#define PFPCI_VMPEND_PENDING_S			0
+#define PFPCI_VMPEND_PENDING_M			BIT(0)
+#define PQ_FIFO_STATUS				0x0009DF40 /* Reset Source: PCIR */
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_S		0
+#define PQ_FIFO_STATUS_PQ_FIFO_COUNT_M		MAKEMASK(0x7FFFFFFF, 0)
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_S		31
+#define PQ_FIFO_STATUS_PQ_FIFO_EMPTY_M		BIT(31)
+#define GLPE_CPUSTATUS0				0x0050BA5C /* Reset Source: CORER */
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_S		0
+#define GLPE_CPUSTATUS0_PECPUSTATUS0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS1				0x0050BA60 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_S		0
+#define GLPE_CPUSTATUS1_PECPUSTATUS1_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CPUSTATUS2				0x0050BA64 /* Reset Source: CORER */
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_S		0
+#define GLPE_CPUSTATUS2_PECPUSTATUS2_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_MDQ_BASE(_i)			(0x00536000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_BASE_MAX_INDEX			511
+#define GLPE_MDQ_BASE_MDOC_INDEX_S		0
+#define GLPE_MDQ_BASE_MDOC_INDEX_M		MAKEMASK(0xFFFFFFF, 0)
+#define GLPE_MDQ_PTR(_i)			(0x00537000 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_PTR_MAX_INDEX			511
+#define GLPE_MDQ_PTR_MDQ_HEAD_S			0
+#define GLPE_MDQ_PTR_MDQ_HEAD_M			MAKEMASK(0x3FFF, 0)
+#define GLPE_MDQ_PTR_MDQ_TAIL_S			16
+#define GLPE_MDQ_PTR_MDQ_TAIL_M			MAKEMASK(0x3FFF, 16)
+#define GLPE_MDQ_SIZE(_i)			(0x00536800 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLPE_MDQ_SIZE_MAX_INDEX			511
+#define GLPE_MDQ_SIZE_MDQ_SIZE_S		0
+#define GLPE_MDQ_SIZE_MDQ_SIZE_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_CTRL				0x0050C000 /* Reset Source: PERST */
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_S		0
+#define GLPE_PEPM_CTRL_PEPM_ENABLE_M		BIT(0)
+#define GLPE_PEPM_CTRL_PEPM_HALT_S		8
+#define GLPE_PEPM_CTRL_PEPM_HALT_M		BIT(8)
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_S	16
+#define GLPE_PEPM_CTRL_PEPM_PUSH_MARGIN_M	MAKEMASK(0xFF, 16)
+#define GLPE_PEPM_DEALLOC			0x0050C004 /* Reset Source: PERST */
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_S		0
+#define GLPE_PEPM_DEALLOC_MDQ_CREDITS_M		MAKEMASK(0x3FFF, 0)
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_S		14
+#define GLPE_PEPM_DEALLOC_PSQ_CREDITS_M		MAKEMASK(0x1F, 14)
+#define GLPE_PEPM_DEALLOC_PQID_S		19
+#define GLPE_PEPM_DEALLOC_PQID_M		MAKEMASK(0x1FF, 19)
+#define GLPE_PEPM_DEALLOC_PORT_S		28
+#define GLPE_PEPM_DEALLOC_PORT_M		MAKEMASK(0x7, 28)
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_S		31
+#define GLPE_PEPM_DEALLOC_DEALLOC_RDY_M		BIT(31)
+#define GLPE_PEPM_PSQ_COUNT			0x0050C020 /* Reset Source: PERST */
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_S	0
+#define GLPE_PEPM_PSQ_COUNT_PEPM_PSQ_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PEPM_THRESH(_i)			(0x0050C840 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define GLPE_PEPM_THRESH_MAX_INDEX		511
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_S	0
+#define GLPE_PEPM_THRESH_PEPM_PSQ_THRESH_M	MAKEMASK(0x1F, 0)
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_S	16
+#define GLPE_PEPM_THRESH_PEPM_MDQ_THRESH_M	MAKEMASK(0x3FFF, 16)
+#define GLPE_PFAEQEDROPCNT(_i)			(0x00503240 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFAEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_PFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCEQEDROPCNT(_i)			(0x00503220 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCEQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_PFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFCQEDROPCNT(_i)			(0x00503200 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFCQEDROPCNT_MAX_INDEX		7
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_PFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMOOISCALLOCERR(_i)		(0x0050B960 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMOOISCALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMQ1ALLOCERR(_i)		(0x0050B920 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMQ1ALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMRRFALLOCERR(_i)		(0x0050B940 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMRRFALLOCERR_MAX_INDEX		7
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFFLMXMITALLOCERR(_i)		(0x0050B900 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFFLMXMITALLOCERR_MAX_INDEX	7
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_PFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_PFTCPNOW50USCNT(_i)		(0x0050B8C0 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPE_PFTCPNOW50USCNT_MAX_INDEX		7
+#define GLPE_PFTCPNOW50USCNT_CNT_S		0
+#define GLPE_PFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_PUSH_PEPM				0x0053241C /* Reset Source: CORER */
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_S		0
+#define GLPE_PUSH_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define GLPE_VFAEQEDROPCNT(_i)			(0x00503100 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFAEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_S	0
+#define GLPE_VFAEQEDROPCNT_AEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCEQEDROPCNT(_i)			(0x00503080 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCEQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_S	0
+#define GLPE_VFCEQEDROPCNT_CEQEDROPCNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFCQEDROPCNT(_i)			(0x00503000 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFCQEDROPCNT_MAX_INDEX		31
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_S		0
+#define GLPE_VFCQEDROPCNT_CQEDROPCNT_M		MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMOOISCALLOCERR(_i)		(0x0050B580 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMOOISCALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMOOISCALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMQ1ALLOCERR(_i)		(0x0050B480 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMQ1ALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMQ1ALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMRRFALLOCERR(_i)		(0x0050B500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMRRFALLOCERR_MAX_INDEX		31
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMRRFALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFFLMXMITALLOCERR(_i)		(0x0050B400 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLPE_VFFLMXMITALLOCERR_MAX_INDEX	31
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_S	0
+#define GLPE_VFFLMXMITALLOCERR_ERROR_COUNT_M	MAKEMASK(0xFFFF, 0)
+#define GLPE_VFTCPNOW50USCNT(_i)		(0x0050B300 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: PE_CORER */
+#define GLPE_VFTCPNOW50USCNT_MAX_INDEX		31
+#define GLPE_VFTCPNOW50USCNT_CNT_S		0
+#define GLPE_VFTCPNOW50USCNT_CNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_AEQALLOC				0x00502D00 /* Reset Source: PFR */
+#define PFPE_AEQALLOC_AECOUNT_S			0
+#define PFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPHIGH				0x0050A100 /* Reset Source: PFR */
+#define PFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define PFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPLOW				0x0050A080 /* Reset Source: PFR */
+#define PFPE_CCQPLOW_PECCQPLOW_S		0
+#define PFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_CCQPSTATUS				0x0050A000 /* Reset Source: PFR */
+#define PFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define PFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define PFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define PFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define PFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define PFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define PFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define PFPE_CQACK				0x00502C80 /* Reset Source: PFR */
+#define PFPE_CQACK_PECQID_S			0
+#define PFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQARM				0x00502C00 /* Reset Source: PFR */
+#define PFPE_CQARM_PECQID_S			0
+#define PFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define PFPE_CQPDB				0x00500800 /* Reset Source: PFR */
+#define PFPE_CQPDB_WQHEAD_S			0
+#define PFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPERRCODES			0x0050A200 /* Reset Source: PFR */
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define PFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define PFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define PFPE_CQPTAIL				0x00500880 /* Reset Source: PFR */
+#define PFPE_CQPTAIL_WQTAIL_S			0
+#define PFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define PFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define PFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define PFPE_IPCONFIG0				0x0050A180 /* Reset Source: PFR */
+#define PFPE_IPCONFIG0_PEIPID_S			0
+#define PFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define PFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define PFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define PFPE_MRTEIDXMASK			0x0050A300 /* Reset Source: PFR */
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define PFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define PFPE_RCVUNEXPECTEDERROR			0x0050A380 /* Reset Source: PFR */
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define PFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define PFPE_TCPNOWTIMER			0x0050A280 /* Reset Source: PFR */
+#define PFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define PFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFPE_WQEALLOC				0x00504400 /* Reset Source: PFR */
+#define PFPE_WQEALLOC_PEQPID_S			0
+#define PFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define PFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define PRT_PEPM_COUNT(_i)			(0x0050C040 + ((_i) * 4)) /* _i=0...511 */ /* Reset Source: PERST */
+#define PRT_PEPM_COUNT_MAX_INDEX		511
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_S		0
+#define PRT_PEPM_COUNT_PEPM_PSQ_COUNT_M		MAKEMASK(0x1F, 0)
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_S		16
+#define PRT_PEPM_COUNT_PEPM_MDQ_COUNT_M		MAKEMASK(0x3FFF, 16)
+#define VFPE_AEQALLOC(_VF)			(0x00502800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_AEQALLOC_MAX_INDEX			255
+#define VFPE_AEQALLOC_AECOUNT_S			0
+#define VFPE_AEQALLOC_AECOUNT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH(_VF)			(0x00508800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPHIGH_MAX_INDEX			255
+#define VFPE_CCQPHIGH_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW(_VF)			(0x00508400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPLOW_MAX_INDEX			255
+#define VFPE_CCQPLOW_PECCQPLOW_S		0
+#define VFPE_CCQPLOW_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS(_VF)			(0x00508000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CCQPSTATUS_MAX_INDEX		255
+#define VFPE_CCQPSTATUS_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK(_VF)				(0x00502400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQACK_MAX_INDEX			255
+#define VFPE_CQACK_PECQID_S			0
+#define VFPE_CQACK_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM(_VF)				(0x00502000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQARM_MAX_INDEX			255
+#define VFPE_CQARM_PECQID_S			0
+#define VFPE_CQARM_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB(_VF)				(0x00500000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPDB_MAX_INDEX			255
+#define VFPE_CQPDB_WQHEAD_S			0
+#define VFPE_CQPDB_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES(_VF)			(0x00509000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPERRCODES_MAX_INDEX		255
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL(_VF)			(0x00500400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_CQPTAIL_MAX_INDEX			255
+#define VFPE_CQPTAIL_WQTAIL_S			0
+#define VFPE_CQPTAIL_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG0(_VF)			(0x00508C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_IPCONFIG0_MAX_INDEX		255
+#define VFPE_IPCONFIG0_PEIPID_S			0
+#define VFPE_IPCONFIG0_PEIPID_M			MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG0_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG0_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_RCVUNEXPECTEDERROR(_VF)		(0x00509C00 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_RCVUNEXPECTEDERROR_MAX_INDEX	255
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER(_VF)			(0x00509400 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_TCPNOWTIMER_MAX_INDEX		255
+#define VFPE_TCPNOWTIMER_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC(_VF)			(0x00504000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_WQEALLOC_MAX_INDEX			255
+#define VFPE_WQEALLOC_PEQPID_S			0
+#define VFPE_WQEALLOC_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define GLPES_PFIP4RXDISCARD(_i)		(0x00541400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_S	0
+#define GLPES_PFIP4RXDISCARD_IP4RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXFRAGSHI(_i)		(0x00541C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_S	0
+#define GLPES_PFIP4RXFRAGSHI_IP4RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXFRAGSLO(_i)		(0x00541C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_S	0
+#define GLPES_PFIP4RXFRAGSLO_IP4RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSHI(_i)		(0x00542404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_S	0
+#define GLPES_PFIP4RXMCOCTSHI_IP4RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCOCTSLO(_i)		(0x00542400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_S	0
+#define GLPES_PFIP4RXMCOCTSLO_IP4RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSHI(_i)		(0x00542C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_S	0
+#define GLPES_PFIP4RXMCPKTSHI_IP4RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXMCPKTSLO(_i)		(0x00542C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_S	0
+#define GLPES_PFIP4RXMCPKTSLO_IP4RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXOCTSHI(_i)			(0x00540404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_S	0
+#define GLPES_PFIP4RXOCTSHI_IP4RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXOCTSLO(_i)			(0x00540400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_S	0
+#define GLPES_PFIP4RXOCTSLO_IP4RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXPKTSHI(_i)			(0x00540C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_S	0
+#define GLPES_PFIP4RXPKTSHI_IP4RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4RXPKTSLO(_i)			(0x00540C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_S	0
+#define GLPES_PFIP4RXPKTSLO_IP4RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4RXTRUNC(_i)			(0x00541800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_S		0
+#define GLPES_PFIP4RXTRUNC_IP4RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXFRAGSHI(_i)		(0x00547404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_S	0
+#define GLPES_PFIP4TXFRAGSHI_IP4TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXFRAGSLO(_i)		(0x00547400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_S	0
+#define GLPES_PFIP4TXFRAGSLO_IP4TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSHI(_i)		(0x00547C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_S	0
+#define GLPES_PFIP4TXMCOCTSHI_IP4TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCOCTSLO(_i)		(0x00547C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_S	0
+#define GLPES_PFIP4TXMCOCTSLO_IP4TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSHI(_i)		(0x00548404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_S	0
+#define GLPES_PFIP4TXMCPKTSHI_IP4TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXMCPKTSLO(_i)		(0x00548400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_S	0
+#define GLPES_PFIP4TXMCPKTSLO_IP4TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXNOROUTE(_i)		(0x0054B400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_S	0
+#define GLPES_PFIP4TXNOROUTE_IP4TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP4TXOCTSHI(_i)			(0x00546404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_S	0
+#define GLPES_PFIP4TXOCTSHI_IP4TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXOCTSLO(_i)			(0x00546400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_S	0
+#define GLPES_PFIP4TXOCTSLO_IP4TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP4TXPKTSHI(_i)			(0x00546C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_S	0
+#define GLPES_PFIP4TXPKTSHI_IP4TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP4TXPKTSLO(_i)			(0x00546C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP4TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_S	0
+#define GLPES_PFIP4TXPKTSLO_IP4TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXDISCARD(_i)		(0x00544400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXDISCARD_MAX_INDEX		127
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_S	0
+#define GLPES_PFIP6RXDISCARD_IP6RXDISCARD_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXFRAGSHI(_i)		(0x00544C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_S	0
+#define GLPES_PFIP6RXFRAGSHI_IP6RXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXFRAGSLO(_i)		(0x00544C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_S	0
+#define GLPES_PFIP6RXFRAGSLO_IP6RXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSHI(_i)		(0x00545404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_S	0
+#define GLPES_PFIP6RXMCOCTSHI_IP6RXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCOCTSLO(_i)		(0x00545400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_S	0
+#define GLPES_PFIP6RXMCOCTSLO_IP6RXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSHI(_i)		(0x00545C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_S	0
+#define GLPES_PFIP6RXMCPKTSHI_IP6RXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXMCPKTSLO(_i)		(0x00545C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_S	0
+#define GLPES_PFIP6RXMCPKTSLO_IP6RXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXOCTSHI(_i)			(0x00543404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_S	0
+#define GLPES_PFIP6RXOCTSHI_IP6RXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXOCTSLO(_i)			(0x00543400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_S	0
+#define GLPES_PFIP6RXOCTSLO_IP6RXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXPKTSHI(_i)			(0x00543C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_S	0
+#define GLPES_PFIP6RXPKTSHI_IP6RXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6RXPKTSLO(_i)			(0x00543C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_S	0
+#define GLPES_PFIP6RXPKTSLO_IP6RXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6RXTRUNC(_i)			(0x00544800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6RXTRUNC_MAX_INDEX		127
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_S		0
+#define GLPES_PFIP6RXTRUNC_IP6RXTRUNC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXFRAGSHI(_i)		(0x00549C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_S	0
+#define GLPES_PFIP6TXFRAGSHI_IP6TXFRAGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXFRAGSLO(_i)		(0x00549C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXFRAGSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_S	0
+#define GLPES_PFIP6TXFRAGSLO_IP6TXFRAGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSHI(_i)		(0x0054A404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_S	0
+#define GLPES_PFIP6TXMCOCTSHI_IP6TXMCOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCOCTSLO(_i)		(0x0054A400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_S	0
+#define GLPES_PFIP6TXMCOCTSLO_IP6TXMCOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSHI(_i)		(0x0054AC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_S	0
+#define GLPES_PFIP6TXMCPKTSHI_IP6TXMCPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXMCPKTSLO(_i)		(0x0054AC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXMCPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_S	0
+#define GLPES_PFIP6TXMCPKTSLO_IP6TXMCPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXNOROUTE(_i)		(0x0054B800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXNOROUTE_MAX_INDEX		127
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_S	0
+#define GLPES_PFIP6TXNOROUTE_IP6TXNOROUTE_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFIP6TXOCTSHI(_i)			(0x00548C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_S	0
+#define GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXOCTSLO(_i)			(0x00548C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXOCTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_S	0
+#define GLPES_PFIP6TXOCTSLO_IP6TXOCTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFIP6TXPKTSHI(_i)			(0x00549404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSHI_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_S	0
+#define GLPES_PFIP6TXPKTSHI_IP6TXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFIP6TXPKTSLO(_i)			(0x00549400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFIP6TXPKTSLO_MAX_INDEX		127
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_S	0
+#define GLPES_PFIP6TXPKTSLO_IP6TXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXRDSHI(_i)			(0x0054EC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMARXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXRDSLO(_i)			(0x0054EC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMARXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXSNDSHI(_i)		(0x0054F404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMARXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXSNDSLO(_i)		(0x0054F400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMARXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMARXWRSHI(_i)			(0x0054E404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMARXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMARXWRSLO(_i)			(0x0054E400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMARXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMARXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXRDSHI(_i)			(0x00550404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_S	0
+#define GLPES_PFRDMATXRDSHI_RDMARXRDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXRDSLO(_i)			(0x00550400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXRDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_S	0
+#define GLPES_PFRDMATXRDSLO_RDMARXRDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXSNDSHI(_i)		(0x00550C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_S	0
+#define GLPES_PFRDMATXSNDSHI_RDMARXSNDSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXSNDSLO(_i)		(0x00550C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXSNDSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_S	0
+#define GLPES_PFRDMATXSNDSLO_RDMARXSNDSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMATXWRSHI(_i)			(0x0054FC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSHI_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_S	0
+#define GLPES_PFRDMATXWRSHI_RDMARXWRSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMATXWRSLO(_i)			(0x0054FC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMATXWRSLO_MAX_INDEX		127
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_S	0
+#define GLPES_PFRDMATXWRSLO_RDMARXWRSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVBNDHI(_i)			(0x00551404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDHI_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_S		0
+#define GLPES_PFRDMAVBNDHI_RDMAVBNDHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVBNDLO(_i)			(0x00551400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVBNDLO_MAX_INDEX		127
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_S		0
+#define GLPES_PFRDMAVBNDLO_RDMAVBNDLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRDMAVINVHI(_i)			(0x00551C04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVHI_MAX_INDEX		127
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_S		0
+#define GLPES_PFRDMAVINVHI_RDMAVINVHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFRDMAVINVLO(_i)			(0x00551C00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRDMAVINVLO_MAX_INDEX		127
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_S		0
+#define GLPES_PFRDMAVINVLO_RDMAVINVLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFRXVLANERR(_i)			(0x00540000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFRXVLANERR_MAX_INDEX		127
+#define GLPES_PFRXVLANERR_RXVLANERR_S		0
+#define GLPES_PFRXVLANERR_RXVLANERR_M		MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRTXSEG(_i)			(0x00552400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRTXSEG_MAX_INDEX		127
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_S		0
+#define GLPES_PFTCPRTXSEG_TCPRTXSEG_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPRXOPTERR(_i)			(0x0054C400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXOPTERR_MAX_INDEX		127
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_S	0
+#define GLPES_PFTCPRXOPTERR_TCPRXOPTERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXPROTOERR(_i)		(0x0054C800 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXPROTOERR_MAX_INDEX		127
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_S	0
+#define GLPES_PFTCPRXPROTOERR_TCPRXPROTOERR_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_PFTCPRXSEGSHI(_i)			(0x0054BC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSHI_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_S	0
+#define GLPES_PFTCPRXSEGSHI_TCPRXSEGSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPRXSEGSLO(_i)			(0x0054BC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPRXSEGSLO_MAX_INDEX		127
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_S	0
+#define GLPES_PFTCPRXSEGSLO_TCPRXSEGSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFTCPTXSEGHI(_i)			(0x0054CC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGHI_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_S		0
+#define GLPES_PFTCPTXSEGHI_TCPTXSEGHI_M		MAKEMASK(0xFFFF, 0)
+#define GLPES_PFTCPTXSEGLO(_i)			(0x0054CC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFTCPTXSEGLO_MAX_INDEX		127
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_S		0
+#define GLPES_PFTCPTXSEGLO_TCPTXSEGLO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPRXPKTSHI(_i)			(0x0054D404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_S	0
+#define GLPES_PFUDPRXPKTSHI_UDPRXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPRXPKTSLO(_i)			(0x0054D400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPRXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_S	0
+#define GLPES_PFUDPRXPKTSLO_UDPRXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_PFUDPTXPKTSHI(_i)			(0x0054DC04 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSHI_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_S	0
+#define GLPES_PFUDPTXPKTSHI_UDPTXPKTSHI_M	MAKEMASK(0xFFFF, 0)
+#define GLPES_PFUDPTXPKTSLO(_i)			(0x0054DC00 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLPES_PFUDPTXPKTSLO_MAX_INDEX		127
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_S	0
+#define GLPES_PFUDPTXPKTSLO_UDPTXPKTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSHI			0x0055E00C /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_S 0
+#define GLPES_RDMARXMULTFPDUSHI_RDMARXMULTFPDUSHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXMULTFPDUSLO			0x0055E008 /* Reset Source: CORER */
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_S 0
+#define GLPES_RDMARXMULTFPDUSLO_RDMARXMULTFPDUSLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOODDPHI			0x0055E014 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_S	0
+#define GLPES_RDMARXOOODDPHI_RDMARXOOODDPHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_RDMARXOOODDPLO			0x0055E010 /* Reset Source: CORER */
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_S	0
+#define GLPES_RDMARXOOODDPLO_RDMARXOOODDPLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXOOONOMARK			0x0055E004 /* Reset Source: CORER */
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_S	0
+#define GLPES_RDMARXOOONOMARK_RDMAOOONOMARK_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_RDMARXUNALIGN			0x0055E000 /* Reset Source: CORER */
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_S	0
+#define GLPES_RDMARXUNALIGN_RDMRXAUNALIGN_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLEHI			0x0055E03C /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_S 0
+#define GLPES_TCPRXFOURHOLEHI_TCPRXFOURHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXFOURHOLELO			0x0055E038 /* Reset Source: CORER */
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_S 0
+#define GLPES_TCPRXFOURHOLELO_TCPRXFOURHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXONEHOLEHI			0x0055E024 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_S	0
+#define GLPES_TCPRXONEHOLEHI_TCPRXONEHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXONEHOLELO			0x0055E020 /* Reset Source: CORER */
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_S	0
+#define GLPES_TCPRXONEHOLELO_TCPRXONEHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXPUREACKHI			0x0055E01C /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_S	0
+#define GLPES_TCPRXPUREACKHI_TCPRXPUREACKSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXPUREACKSLO			0x0055E018 /* Reset Source: CORER */
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_S	0
+#define GLPES_TCPRXPUREACKSLO_TCPRXPUREACKLO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLEHI			0x0055E034 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_S 0
+#define GLPES_TCPRXTHREEHOLEHI_TCPRXTHREEHOLEHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTHREEHOLELO			0x0055E030 /* Reset Source: CORER */
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_S 0
+#define GLPES_TCPRXTHREEHOLELO_TCPRXTHREEHOLELO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLEHI			0x0055E02C /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_S	0
+#define GLPES_TCPRXTWOHOLEHI_TCPRXTWOHOLEHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPRXTWOHOLELO			0x0055E028 /* Reset Source: CORER */
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_S	0
+#define GLPES_TCPRXTWOHOLELO_TCPRXTWOHOLELO_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTHI		0x0055E044 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_S 0
+#define GLPES_TCPTXRETRANSFASTHI_TCPTXRETRANSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXRETRANSFASTLO		0x0055E040 /* Reset Source: CORER */
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_S 0
+#define GLPES_TCPTXRETRANSFASTLO_TCPTXRETRANSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTHI			0x0055E04C /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_S 0
+#define GLPES_TCPTXTOUTSFASTHI_TCPTXTOUTSFASTHI_M MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSFASTLO			0x0055E048 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_S 0
+#define GLPES_TCPTXTOUTSFASTLO_TCPTXTOUTSFASTLO_M MAKEMASK(0xFFFFFFFF, 0)
+#define GLPES_TCPTXTOUTSHI			0x0055E054 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_S	0
+#define GLPES_TCPTXTOUTSHI_TCPTXTOUTSHI_M	MAKEMASK(0xFFFFFF, 0)
+#define GLPES_TCPTXTOUTSLO			0x0055E050 /* Reset Source: CORER */
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_S	0
+#define GLPES_TCPTXTOUTSLO_TCPTXTOUTSLO_M	MAKEMASK(0xFFFFFFFF, 0)
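+/*
+ * Usage note (editorial, not autogenerated): the GLPES statistics above are
+ * exported as 32-bit LO / 16- or 24-bit HI register pairs. A minimal sketch
+ * of assembling one 64-bit PF counter, assuming the rd32() accessor from
+ * ice_osdep.h:
+ *
+ *	static inline u64 ice_pes_ip6_tx_octets(struct ice_hw *hw, u8 pf_id)
+ *	{
+ *		u64 lo = rd32(hw, GLPES_PFIP6TXOCTSLO(pf_id));
+ *		u64 hi = rd32(hw, GLPES_PFIP6TXOCTSHI(pf_id)) &
+ *			 GLPES_PFIP6TXOCTSHI_IP6TXOCTSHI_M;
+ *
+ *		return (hi << 32) | lo;
+ *	}
+ */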
+#define GL_PWR_MODE_CTL				0x000B820C /* Reset Source: POR */
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_S	0
+#define GL_PWR_MODE_CTL_SWITCH_PWR_MODE_EN_M	BIT(0)
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_S	1
+#define GL_PWR_MODE_CTL_NIC_PWR_MODE_EN_M	BIT(1)
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_S	2
+#define GL_PWR_MODE_CTL_S5_PWR_MODE_EN_M	BIT(2)
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_S	3
+#define GL_PWR_MODE_CTL_CAR_MAX_SW_CONFIG_M	MAKEMASK(0x3, 3)
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_S		30
+#define GL_PWR_MODE_CTL_CAR_MAX_BW_M		MAKEMASK(0x3, 30)
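+/*
+ * Field-access sketch (editorial): every field in this file follows the
+ * _S (bit shift) / _M (pre-shifted mask) convention, so a read-modify-write
+ * of the 2-bit CAR_MAX_BW field could look like the following, with
+ * rd32()/wr32() assumed from ice_osdep.h and bw a caller-supplied value:
+ *
+ *	u32 val = rd32(hw, GL_PWR_MODE_CTL);
+ *
+ *	val &= ~GL_PWR_MODE_CTL_CAR_MAX_BW_M;
+ *	val |= (bw << GL_PWR_MODE_CTL_CAR_MAX_BW_S) &
+ *	       GL_PWR_MODE_CTL_CAR_MAX_BW_M;
+ *	wr32(hw, GL_PWR_MODE_CTL, val);
+ */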
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT	0x000B825C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_H_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT	0x000B8218 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_L_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT	0x000B8260 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_S 0
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PECLK_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_S 3
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UCLK_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_S 6
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_LCLK_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_S 9
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_PSM_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_S 12
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_RXCTL_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_S 15
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_UANA_M MAKEMASK(0x7, 15)
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_S 18
+#define GL_PWR_MODE_DIVIDE_CTRL_M_DEFAULT_DEFAULT_DIV_VAL_S5_M MAKEMASK(0x7, 18)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK	0x000B8200 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_LCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK	0x000B81F0 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PECLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM	0x000B81FC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_PSM_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL	0x000B81F8 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_RXCTL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA	0x000B8208 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UANA_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK	0x000B81F4 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_H_UCLK_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK	0x000B8244 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_LCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK	0x000B8220 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PECLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM	0x000B8240 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_PSM_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL	0x000B823C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_RXCTL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA	0x000B8248 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UANA_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK	0x000B8238 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_L_UCLK_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK	0x000B8230 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_LCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK	0x000B821C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PECLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM	0x000B822C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_PSM_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL	0x000B8228 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_RXCTL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA	0x000B8234 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UANA_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK	0x000B8224 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S0_CTRL_M_UCLK_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL		0x000B81EC /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_S 0
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_50G_H_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_S 3
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_25G_H_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_S 6
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_10G_H_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_S 9
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_4G_H_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_S 12
+#define GL_PWR_MODE_DIVIDE_S5_H_CTRL_DIV_VAL_TBW_A50G_H_M MAKEMASK(0xF, 12)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL		0x000B824C /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_S 0
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_50G_L_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_S 3
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_25G_L_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_S 6
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_10G_L_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_S 9
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_4G_L_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_S 12
+#define GL_PWR_MODE_DIVIDE_S5_L_CTRL_DIV_VAL_TBW_A50G_L_M MAKEMASK(0x7, 12)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL		0x000B8250 /* Reset Source: POR */
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_S 0
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_50G_M_M MAKEMASK(0x7, 0)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_S 3
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_25G_M_M MAKEMASK(0x7, 3)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_S 6
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_10G_M_M MAKEMASK(0x7, 6)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_S 9
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_4G_M_M MAKEMASK(0x7, 9)
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_S 12
+#define GL_PWR_MODE_DIVIDE_S5_M_CTRL_DIV_VAL_TBW_A50G_M_M MAKEMASK(0x7, 12)
+#define GL_S5_PWR_MODE_EXIT_CTL			0x000B8270 /* Reset Source: POR */
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_S 0
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_AUTO_EXIT_M BIT(0)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_S 1
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_FW_EXIT_M BIT(1)
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_S 3
+#define GL_S5_PWR_MODE_EXIT_CTL_S5_PWR_MODE_PRST_FLOWS_ON_CORER_M BIT(3)
+#define GLGEN_PME_TO				0x000B81BC /* Reset Source: POR */
+#define GLGEN_PME_TO_PME_TO_FOR_PE_S		0
+#define GLGEN_PME_TO_PME_TO_FOR_PE_M		BIT(0)
+#define PRTPM_EEE_STAT				0x001E4320 /* Reset Source: GLOBR */
+#define PRTPM_EEE_STAT_EEE_NEG_S		29
+#define PRTPM_EEE_STAT_EEE_NEG_M		BIT(29)
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_S		30
+#define PRTPM_EEE_STAT_RX_LPI_STATUS_M		BIT(30)
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_S		31
+#define PRTPM_EEE_STAT_TX_LPI_STATUS_M		BIT(31)
+#define PRTPM_EEEC				0x001E4380 /* Reset Source: GLOBR */
+#define PRTPM_EEEC_TW_WAKE_MIN_S		16
+#define PRTPM_EEEC_TW_WAKE_MIN_M		MAKEMASK(0x3F, 16)
+#define PRTPM_EEEC_TX_LU_LPI_DLY_S		24
+#define PRTPM_EEEC_TX_LU_LPI_DLY_M		MAKEMASK(0x3, 24)
+#define PRTPM_EEEC_TEEE_DLY_S			26
+#define PRTPM_EEEC_TEEE_DLY_M			MAKEMASK(0x3F, 26)
+#define PRTPM_EEEFWD				0x001E4400 /* Reset Source: GLOBR */
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_S	31
+#define PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_M	BIT(31)
+#define PRTPM_EEER				0x001E4360 /* Reset Source: GLOBR */
+#define PRTPM_EEER_TW_SYSTEM_S			0
+#define PRTPM_EEER_TW_SYSTEM_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_EEER_TX_LPI_EN_S			16
+#define PRTPM_EEER_TX_LPI_EN_M			BIT(16)
+#define PRTPM_EEETXC				0x001E43E0 /* Reset Source: GLOBR */
+#define PRTPM_EEETXC_TW_PHY_S			0
+#define PRTPM_EEETXC_TW_PHY_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_RLPIC				0x001E43A0 /* Reset Source: GLOBR */
+#define PRTPM_RLPIC_ERLPIC_S			0
+#define PRTPM_RLPIC_ERLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTPM_TLPIC				0x001E43C0 /* Reset Source: GLOBR */
+#define PRTPM_TLPIC_ETLPIC_S			0
+#define PRTPM_TLPIC_ETLPIC_M			MAKEMASK(0xFFFFFFFF, 0)
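+/*
+ * Status-check sketch (editorial): the PRTPM_EEE_STAT fields above are
+ * plain BIT() flags, so testing whether EEE was negotiated and whether
+ * either direction is currently in LPI reduces to mask tests:
+ *
+ *	u32 stat = rd32(hw, PRTPM_EEE_STAT);
+ *	bool eee_neg = !!(stat & PRTPM_EEE_STAT_EEE_NEG_M);
+ *	bool in_lpi = !!(stat & (PRTPM_EEE_STAT_RX_LPI_STATUS_M |
+ *				 PRTPM_EEE_STAT_TX_LPI_STATUS_M));
+ */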
+#define GLRPB_DHW(_i)				(0x000AC000 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DHW_MAX_INDEX			15
+#define GLRPB_DHW_DHW_TCN_S			0
+#define GLRPB_DHW_DHW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DLW(_i)				(0x000AC044 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DLW_MAX_INDEX			15
+#define GLRPB_DLW_DLW_TCN_S			0
+#define GLRPB_DLW_DLW_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DPS(_i)				(0x000AC084 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPB_DPS_MAX_INDEX			15
+#define GLRPB_DPS_DPS_TCN_S			0
+#define GLRPB_DPS_DPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_DSI_EN				0x000AC324 /* Reset Source: CORER */
+#define GLRPB_DSI_EN_DSI_EN_S			0
+#define GLRPB_DSI_EN_DSI_EN_M			BIT(0)
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_S	1
+#define GLRPB_DSI_EN_DSI_L2_MAC_ERR_DROP_EN_M	BIT(1)
+#define GLRPB_SHW(_i)				(0x000AC120 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SHW_MAX_INDEX			7
+#define GLRPB_SHW_SHW_S				0
+#define GLRPB_SHW_SHW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SLW(_i)				(0x000AC140 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SLW_MAX_INDEX			7
+#define GLRPB_SLW_SLW_S				0
+#define GLRPB_SLW_SLW_M				MAKEMASK(0xFFFFF, 0)
+#define GLRPB_SPS(_i)				(0x000AC0C4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPB_SPS_MAX_INDEX			7
+#define GLRPB_SPS_SPS_TCN_S			0
+#define GLRPB_SPS_SPS_TCN_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TC_CFG(_i)			(0x000AC2A4 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TC_CFG_MAX_INDEX			31
+#define GLRPB_TC_CFG_D_POOL_S			0
+#define GLRPB_TC_CFG_D_POOL_M			MAKEMASK(0xFFFF, 0)
+#define GLRPB_TC_CFG_S_POOL_S			16
+#define GLRPB_TC_CFG_S_POOL_M			MAKEMASK(0xFFFF, 16)
+#define GLRPB_TCHW(_i)				(0x000AC330 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCHW_MAX_INDEX			31
+#define GLRPB_TCHW_TCHW_S			0
+#define GLRPB_TCHW_TCHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPB_TCLW(_i)				(0x000AC3B0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPB_TCLW_MAX_INDEX			31
+#define GLRPB_TCLW_TCLW_S			0
+#define GLRPB_TCLW_TCLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLQF_APBVT(_i)				(0x00450000 + ((_i) * 4)) /* _i=0...2047 */ /* Reset Source: CORER */
+#define GLQF_APBVT_MAX_INDEX			2047
+#define GLQF_APBVT_APBVT_S			0
+#define GLQF_APBVT_APBVT_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN_0				0x00460028 /* Reset Source: CORER */
+#define GLQF_FD_CLSN_0_HITSBCNT_S		0
+#define GLQF_FD_CLSN_0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CLSN1				0x00460030 /* Reset Source: CORER */
+#define GLQF_FD_CLSN1_HITLBCNT_S		0
+#define GLQF_FD_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FD_CNT				0x00460018 /* Reset Source: CORER */
+#define GLQF_FD_CNT_FD_GCNT_S			0
+#define GLQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_CNT_FD_BCNT_S			16
+#define GLQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FD_CTL				0x00460000 /* Reset Source: CORER */
+#define GLQF_FD_CTL_FDLONG_S			0
+#define GLQF_FD_CTL_FDLONG_M			MAKEMASK(0xF, 0)
+#define GLQF_FD_CTL_HASH_REPORT_S		4
+#define GLQF_FD_CTL_HASH_REPORT_M		BIT(4)
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_S		5
+#define GLQF_FD_CTL_FLT_ADDR_REPORT_M		BIT(5)
+#define GLQF_FD_SIZE				0x00460010 /* Reset Source: CORER */
+#define GLQF_FD_SIZE_FD_GSIZE_S			0
+#define GLQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define GLQF_FD_SIZE_FD_BSIZE_S			16
+#define GLQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define GLQF_FDCNT_0				0x00460020 /* Reset Source: CORER */
+#define GLQF_FDCNT_0_BUCKETCNT_S		0
+#define GLQF_FDCNT_0_BUCKETCNT_M		MAKEMASK(0x7FFF, 0)
+#define GLQF_FDCNT_0_CNT_NOT_VLD_S		31
+#define GLQF_FDCNT_0_CNT_NOT_VLD_M		BIT(31)
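+/*
+ * Extraction sketch (editorial): GLQF_FD_CNT packs the guaranteed and
+ * best-effort flow director filter counts into one register; with the
+ * _S/_M pairs the two 15-bit fields come out as:
+ *
+ *	u32 cnt = rd32(hw, GLQF_FD_CNT);
+ *	u16 guar = (cnt & GLQF_FD_CNT_FD_GCNT_M) >> GLQF_FD_CNT_FD_GCNT_S;
+ *	u16 best = (cnt & GLQF_FD_CNT_FD_BCNT_M) >> GLQF_FD_CNT_FD_BCNT_S;
+ */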
+#define GLQF_FDEVICTENA(_i)			(0x00452000 + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLQF_FDEVICTENA_MAX_INDEX		3
+#define GLQF_FDEVICTENA_FDEVICTENA_S		0
+#define GLQF_FDEVICTENA_FDEVICTENA_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDINSET(_i, _j)			(0x00412000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDINSET_MAX_INDEX			127
+#define GLQF_FDINSET_FV_WORD_INDX0_S		0
+#define GLQF_FDINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDINSET_FV_WORD_VAL0_S		7
+#define GLQF_FDINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDINSET_FV_WORD_INDX1_S		8
+#define GLQF_FDINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDINSET_FV_WORD_VAL1_S		15
+#define GLQF_FDINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDINSET_FV_WORD_INDX2_S		16
+#define GLQF_FDINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDINSET_FV_WORD_VAL2_S		23
+#define GLQF_FDINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDINSET_FV_WORD_INDX3_S		24
+#define GLQF_FDINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDINSET_FV_WORD_VAL3_S		31
+#define GLQF_FDINSET_FV_WORD_VAL3_M		BIT(31)
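+/*
+ * Composition sketch (editorial): judging by the field names, each
+ * GLQF_FDINSET dword carries four 5-bit field-vector word indexes, each
+ * with its own valid bit. Packing index 12 into slot 0 and marking it
+ * valid (prof and word are hypothetical loop indexes here):
+ *
+ *	u32 v = ((12 << GLQF_FDINSET_FV_WORD_INDX0_S) &
+ *		 GLQF_FDINSET_FV_WORD_INDX0_M) |
+ *		GLQF_FDINSET_FV_WORD_VAL0_M;
+ *	wr32(hw, GLQF_FDINSET(prof, word), v);
+ */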
+#define GLQF_FDMASK(_i)				(0x00410800 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_MAX_INDEX			31
+#define GLQF_FDMASK_MSK_INDEX_S			0
+#define GLQF_FDMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_FDMASK_MASK_S			16
+#define GLQF_FDMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_FDMASK_SEL(_i)			(0x00410400 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_FDMASK_SEL_MAX_INDEX		127
+#define GLQF_FDMASK_SEL_MASK_SEL_S		0
+#define GLQF_FDMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_FDSWAP(_i, _j)			(0x00413000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_FDSWAP_MAX_INDEX			127
+#define GLQF_FDSWAP_FV_WORD_INDX0_S		0
+#define GLQF_FDSWAP_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_FDSWAP_FV_WORD_VAL0_S		7
+#define GLQF_FDSWAP_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_FDSWAP_FV_WORD_INDX1_S		8
+#define GLQF_FDSWAP_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_FDSWAP_FV_WORD_VAL1_S		15
+#define GLQF_FDSWAP_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_FDSWAP_FV_WORD_INDX2_S		16
+#define GLQF_FDSWAP_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_FDSWAP_FV_WORD_VAL2_S		23
+#define GLQF_FDSWAP_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_FDSWAP_FV_WORD_INDX3_S		24
+#define GLQF_FDSWAP_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_FDSWAP_FV_WORD_VAL3_S		31
+#define GLQF_FDSWAP_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HINSET(_i, _j)			(0x0040E000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HINSET_MAX_INDEX			127
+#define GLQF_HINSET_FV_WORD_INDX0_S		0
+#define GLQF_HINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HINSET_FV_WORD_VAL0_S		7
+#define GLQF_HINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_HINSET_FV_WORD_INDX1_S		8
+#define GLQF_HINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HINSET_FV_WORD_VAL1_S		15
+#define GLQF_HINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_HINSET_FV_WORD_INDX2_S		16
+#define GLQF_HINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HINSET_FV_WORD_VAL2_S		23
+#define GLQF_HINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_HINSET_FV_WORD_INDX3_S		24
+#define GLQF_HINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HINSET_FV_WORD_VAL3_S		31
+#define GLQF_HINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_HKEY(_i)				(0x00456000 + ((_i) * 4)) /* _i=0...12 */ /* Reset Source: CORER */
+#define GLQF_HKEY_MAX_INDEX			12
+#define GLQF_HKEY_KEY_0_S			0
+#define GLQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define GLQF_HKEY_KEY_1_S			8
+#define GLQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define GLQF_HKEY_KEY_2_S			16
+#define GLQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define GLQF_HKEY_KEY_3_S			24
+#define GLQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define GLQF_HLUT(_i, _j)			(0x00438000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_MAX_INDEX			127
+#define GLQF_HLUT_LUT0_S			0
+#define GLQF_HLUT_LUT0_M			MAKEMASK(0x3F, 0)
+#define GLQF_HLUT_LUT1_S			8
+#define GLQF_HLUT_LUT1_M			MAKEMASK(0x3F, 8)
+#define GLQF_HLUT_LUT2_S			16
+#define GLQF_HLUT_LUT2_M			MAKEMASK(0x3F, 16)
+#define GLQF_HLUT_LUT3_S			24
+#define GLQF_HLUT_LUT3_M			MAKEMASK(0x3F, 24)
+#define GLQF_HLUT_SIZE(_i)			(0x00455400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_HLUT_SIZE_MAX_INDEX		15
+#define GLQF_HLUT_SIZE_HSIZE_S			0
+#define GLQF_HLUT_SIZE_HSIZE_M			BIT(0)
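+/*
+ * Programming sketch (editorial): GLQF_HKEY holds the RSS hash key as 13
+ * dwords (MAX_INDEX 12, i.e. 52 bytes), one key byte per KEY_n field, and
+ * GLQF_HLUT packs four 6-bit LUT entries per dword. Assuming wr32() from
+ * ice_osdep.h and a key buffer laid out byte 0 first, loading the key
+ * could be:
+ *
+ *	for (i = 0; i <= GLQF_HKEY_MAX_INDEX; i++)
+ *		wr32(hw, GLQF_HKEY(i), ((u32 *)key)[i]);
+ */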
+#define GLQF_HMASK(_i)				(0x0040FC00 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_HMASK_MAX_INDEX			31
+#define GLQF_HMASK_MSK_INDEX_S			0
+#define GLQF_HMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_HMASK_MASK_S			16
+#define GLQF_HMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_HMASK_SEL(_i)			(0x00410000 + ((_i) * 4)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GLQF_HMASK_SEL_MAX_INDEX		127
+#define GLQF_HMASK_SEL_MASK_SEL_S		0
+#define GLQF_HMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_HSYMM(_i, _j)			(0x0040F000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_HSYMM_MAX_INDEX			127
+#define GLQF_HSYMM_FV_SYMM_INDX0_S		0
+#define GLQF_HSYMM_FV_SYMM_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_HSYMM_SYMM0_ENA_S			7
+#define GLQF_HSYMM_SYMM0_ENA_M			BIT(7)
+#define GLQF_HSYMM_FV_SYMM_INDX1_S		8
+#define GLQF_HSYMM_FV_SYMM_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_HSYMM_SYMM1_ENA_S			15
+#define GLQF_HSYMM_SYMM1_ENA_M			BIT(15)
+#define GLQF_HSYMM_FV_SYMM_INDX2_S		16
+#define GLQF_HSYMM_FV_SYMM_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_HSYMM_SYMM2_ENA_S			23
+#define GLQF_HSYMM_SYMM2_ENA_M			BIT(23)
+#define GLQF_HSYMM_FV_SYMM_INDX3_S		24
+#define GLQF_HSYMM_FV_SYMM_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_HSYMM_SYMM3_ENA_S			31
+#define GLQF_HSYMM_SYMM3_ENA_M			BIT(31)
+#define GLQF_PE_APBVT_CNT			0x00455500 /* Reset Source: CORER */
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_S		0
+#define GLQF_PE_APBVT_CNT_APBVT_LAN_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLQF_PE_CMD				0x00471080 /* Reset Source: CORER */
+#define GLQF_PE_CMD_ADDREM_STS_S		0
+#define GLQF_PE_CMD_ADDREM_STS_M		MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_CMD_ADDREM_ID_S			28
+#define GLQF_PE_CMD_ADDREM_ID_M			MAKEMASK(0xF, 28)
+#define GLQF_PE_CTL				0x004710C0 /* Reset Source: CORER */
+#define GLQF_PE_CTL_PELONG_S			0
+#define GLQF_PE_CTL_PELONG_M			MAKEMASK(0xF, 0)
+#define GLQF_PE_CTL2(_i)			(0x00455200 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PE_CTL2_MAX_INDEX			31
+#define GLQF_PE_CTL2_TO_QH_S			0
+#define GLQF_PE_CTL2_TO_QH_M			MAKEMASK(0x3, 0)
+#define GLQF_PE_CTL2_APBVT_ENA_S		2
+#define GLQF_PE_CTL2_APBVT_ENA_M		BIT(2)
+#define GLQF_PE_FVE				0x0020E514 /* Reset Source: CORER */
+#define GLQF_PE_FVE_W_ENA_S			0
+#define GLQF_PE_FVE_W_ENA_M			MAKEMASK(0xFFFFFF, 0)
+#define GLQF_PE_OSR_STS				0x00471040 /* Reset Source: CORER */
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_S	0
+#define GLQF_PE_OSR_STS_QH_SRCH_MAXOSR_M	MAKEMASK(0x3FF, 0)
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_S		16
+#define GLQF_PE_OSR_STS_QH_CMD_MAXOSR_M		MAKEMASK(0x3FF, 16)
+#define GLQF_PEINSET(_i, _j)			(0x00415000 + ((_i) * 4 + (_j) * 128)) /* _i=0...31, _j=0...5 */ /* Reset Source: CORER */
+#define GLQF_PEINSET_MAX_INDEX			31
+#define GLQF_PEINSET_FV_WORD_INDX0_S		0
+#define GLQF_PEINSET_FV_WORD_INDX0_M		MAKEMASK(0x1F, 0)
+#define GLQF_PEINSET_FV_WORD_VAL0_S		7
+#define GLQF_PEINSET_FV_WORD_VAL0_M		BIT(7)
+#define GLQF_PEINSET_FV_WORD_INDX1_S		8
+#define GLQF_PEINSET_FV_WORD_INDX1_M		MAKEMASK(0x1F, 8)
+#define GLQF_PEINSET_FV_WORD_VAL1_S		15
+#define GLQF_PEINSET_FV_WORD_VAL1_M		BIT(15)
+#define GLQF_PEINSET_FV_WORD_INDX2_S		16
+#define GLQF_PEINSET_FV_WORD_INDX2_M		MAKEMASK(0x1F, 16)
+#define GLQF_PEINSET_FV_WORD_VAL2_S		23
+#define GLQF_PEINSET_FV_WORD_VAL2_M		BIT(23)
+#define GLQF_PEINSET_FV_WORD_INDX3_S		24
+#define GLQF_PEINSET_FV_WORD_INDX3_M		MAKEMASK(0x1F, 24)
+#define GLQF_PEINSET_FV_WORD_VAL3_S		31
+#define GLQF_PEINSET_FV_WORD_VAL3_M		BIT(31)
+#define GLQF_PEMASK(_i)				(0x00415400 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_MAX_INDEX			15
+#define GLQF_PEMASK_MSK_INDEX_S			0
+#define GLQF_PEMASK_MSK_INDEX_M			MAKEMASK(0x1F, 0)
+#define GLQF_PEMASK_MASK_S			16
+#define GLQF_PEMASK_MASK_M			MAKEMASK(0xFFFF, 16)
+#define GLQF_PEMASK_SEL(_i)			(0x00415500 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLQF_PEMASK_SEL_MAX_INDEX		31
+#define GLQF_PEMASK_SEL_MASK_SEL_S		0
+#define GLQF_PEMASK_SEL_MASK_SEL_M		MAKEMASK(0xFFFF, 0)
+#define GLQF_PETABLE_CLR(_i)			(0x000AA078 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLQF_PETABLE_CLR_MAX_INDEX		1
+#define GLQF_PETABLE_CLR_VM_VF_NUM_S		0
+#define GLQF_PETABLE_CLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 0)
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_S		10
+#define GLQF_PETABLE_CLR_VM_VF_TYPE_M		MAKEMASK(0x3, 10)
+#define GLQF_PETABLE_CLR_PF_NUM_S		12
+#define GLQF_PETABLE_CLR_PF_NUM_M		MAKEMASK(0x7, 12)
+#define GLQF_PETABLE_CLR_PE_BUSY_S		16
+#define GLQF_PETABLE_CLR_PE_BUSY_M		BIT(16)
+#define GLQF_PETABLE_CLR_PE_CLEAR_S		17
+#define GLQF_PETABLE_CLR_PE_CLEAR_M		BIT(17)
+#define GLQF_PROF2TC(_i, _j)			(0x0044D000 + ((_i) * 4 + (_j) * 512)) /* _i=0...127, _j=0...3 */ /* Reset Source: CORER */
+#define GLQF_PROF2TC_MAX_INDEX			127
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_S		0
+#define GLQF_PROF2TC_OVERRIDE_ENA_0_M		BIT(0)
+#define GLQF_PROF2TC_REGION_0_S			1
+#define GLQF_PROF2TC_REGION_0_M			MAKEMASK(0x7, 1)
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_S		4
+#define GLQF_PROF2TC_OVERRIDE_ENA_1_M		BIT(4)
+#define GLQF_PROF2TC_REGION_1_S			5
+#define GLQF_PROF2TC_REGION_1_M			MAKEMASK(0x7, 5)
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_S		8
+#define GLQF_PROF2TC_OVERRIDE_ENA_2_M		BIT(8)
+#define GLQF_PROF2TC_REGION_2_S			9
+#define GLQF_PROF2TC_REGION_2_M			MAKEMASK(0x7, 9)
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_S		12
+#define GLQF_PROF2TC_OVERRIDE_ENA_3_M		BIT(12)
+#define GLQF_PROF2TC_REGION_3_S			13
+#define GLQF_PROF2TC_REGION_3_M			MAKEMASK(0x7, 13)
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_S		16
+#define GLQF_PROF2TC_OVERRIDE_ENA_4_M		BIT(16)
+#define GLQF_PROF2TC_REGION_4_S			17
+#define GLQF_PROF2TC_REGION_4_M			MAKEMASK(0x7, 17)
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_S		20
+#define GLQF_PROF2TC_OVERRIDE_ENA_5_M		BIT(20)
+#define GLQF_PROF2TC_REGION_5_S			21
+#define GLQF_PROF2TC_REGION_5_M			MAKEMASK(0x7, 21)
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_S		24
+#define GLQF_PROF2TC_OVERRIDE_ENA_6_M		BIT(24)
+#define GLQF_PROF2TC_REGION_6_S			25
+#define GLQF_PROF2TC_REGION_6_M			MAKEMASK(0x7, 25)
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_S		28
+#define GLQF_PROF2TC_OVERRIDE_ENA_7_M		BIT(28)
+#define GLQF_PROF2TC_REGION_7_S			29
+#define GLQF_PROF2TC_REGION_7_M			MAKEMASK(0x7, 29)
+#define PFQF_FD_CNT				0x00460180 /* Reset Source: CORER */
+#define PFQF_FD_CNT_FD_GCNT_S			0
+#define PFQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_CNT_FD_BCNT_S			16
+#define PFQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_ENA				0x0043A000 /* Reset Source: CORER */
+#define PFQF_FD_ENA_FD_ENA_S			0
+#define PFQF_FD_ENA_FD_ENA_M			BIT(0)
+#define PFQF_FD_SIZE				0x00460100 /* Reset Source: CORER */
+#define PFQF_FD_SIZE_FD_GSIZE_S			0
+#define PFQF_FD_SIZE_FD_GSIZE_M			MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SIZE_FD_BSIZE_S			16
+#define PFQF_FD_SIZE_FD_BSIZE_M			MAKEMASK(0x7FFF, 16)
+#define PFQF_FD_SUBTRACT			0x00460200 /* Reset Source: CORER */
+#define PFQF_FD_SUBTRACT_FD_GCNT_S		0
+#define PFQF_FD_SUBTRACT_FD_GCNT_M		MAKEMASK(0x7FFF, 0)
+#define PFQF_FD_SUBTRACT_FD_BCNT_S		16
+#define PFQF_FD_SUBTRACT_FD_BCNT_M		MAKEMASK(0x7FFF, 16)
+#define PFQF_HLUT(_i)				(0x00430000 + ((_i) * 64)) /* _i=0...511 */ /* Reset Source: CORER */
+#define PFQF_HLUT_MAX_INDEX			511
+#define PFQF_HLUT_LUT0_S			0
+#define PFQF_HLUT_LUT0_M			MAKEMASK(0xFF, 0)
+#define PFQF_HLUT_LUT1_S			8
+#define PFQF_HLUT_LUT1_M			MAKEMASK(0xFF, 8)
+#define PFQF_HLUT_LUT2_S			16
+#define PFQF_HLUT_LUT2_M			MAKEMASK(0xFF, 16)
+#define PFQF_HLUT_LUT3_S			24
+#define PFQF_HLUT_LUT3_M			MAKEMASK(0xFF, 24)
+#define PFQF_HLUT_SIZE				0x00455480 /* Reset Source: CORER */
+#define PFQF_HLUT_SIZE_HSIZE_S			0
+#define PFQF_HLUT_SIZE_HSIZE_M			MAKEMASK(0x3, 0)
+#define PFQF_PE_CLSN0				0x00470480 /* Reset Source: CORER */
+#define PFQF_PE_CLSN0_HITSBCNT_S		0
+#define PFQF_PE_CLSN0_HITSBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CLSN1				0x00470500 /* Reset Source: CORER */
+#define PFQF_PE_CLSN1_HITLBCNT_S		0
+#define PFQF_PE_CLSN1_HITLBCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFQF_PE_CTL1				0x00470000 /* Reset Source: CORER */
+#define PFQF_PE_CTL1_PEHSIZE_S			0
+#define PFQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_CTL2				0x00470040 /* Reset Source: CORER */
+#define PFQF_PE_CTL2_PEDSIZE_S			0
+#define PFQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define PFQF_PE_FILTERING_ENA			0x0043A080 /* Reset Source: CORER */
+#define PFQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define PFQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define PFQF_PE_FLHD				0x00470100 /* Reset Source: CORER */
+#define PFQF_PE_FLHD_FLHD_S			0
+#define PFQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define PFQF_PE_ST_CTL				0x00470400 /* Reset Source: CORER */
+#define PFQF_PE_ST_CTL_PF_CNT_EN_S		0
+#define PFQF_PE_ST_CTL_PF_CNT_EN_M		BIT(0)
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_S		1
+#define PFQF_PE_ST_CTL_VFS_CNT_EN_M		BIT(1)
+#define PFQF_PE_ST_CTL_VF_CNT_EN_S		2
+#define PFQF_PE_ST_CTL_VF_CNT_EN_M		BIT(2)
+#define PFQF_PE_ST_CTL_VF_NUM_S			16
+#define PFQF_PE_ST_CTL_VF_NUM_M			MAKEMASK(0xFF, 16)
+#define PFQF_PE_TC_CTL				0x00452080 /* Reset Source: CORER */
+#define PFQF_PE_TC_CTL_TC_EN_PF_S		0
+#define PFQF_PE_TC_CTL_TC_EN_PF_M		MAKEMASK(0xFF, 0)
+#define PFQF_PE_TC_CTL_TC_EN_VF_S		16
+#define PFQF_PE_TC_CTL_TC_EN_VF_M		MAKEMASK(0xFF, 16)
+#define PFQF_PECNT_0				0x00470200 /* Reset Source: CORER */
+#define PFQF_PECNT_0_BUCKETCNT_S		0
+#define PFQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define PFQF_PECNT_1				0x00470300 /* Reset Source: CORER */
+#define PFQF_PECNT_1_FLTCNT_S			0
+#define PFQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
+#define VPQF_PE_CTL1(_VF)			(0x00474000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL1_MAX_INDEX			255
+#define VPQF_PE_CTL1_PEHSIZE_S			0
+#define VPQF_PE_CTL1_PEHSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_CTL2(_VF)			(0x00474800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_CTL2_MAX_INDEX			255
+#define VPQF_PE_CTL2_PEDSIZE_S			0
+#define VPQF_PE_CTL2_PEDSIZE_M			MAKEMASK(0xF, 0)
+#define VPQF_PE_FILTERING_ENA(_VF)		(0x00455800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FILTERING_ENA_MAX_INDEX		255
+#define VPQF_PE_FILTERING_ENA_PE_ENA_S		0
+#define VPQF_PE_FILTERING_ENA_PE_ENA_M		BIT(0)
+#define VPQF_PE_FLHD(_VF)			(0x00472000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PE_FLHD_MAX_INDEX			255
+#define VPQF_PE_FLHD_FLHD_S			0
+#define VPQF_PE_FLHD_FLHD_M			MAKEMASK(0xFFFFFF, 0)
+#define VPQF_PECNT_0(_VF)			(0x00472800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_0_MAX_INDEX			255
+#define VPQF_PECNT_0_BUCKETCNT_S		0
+#define VPQF_PECNT_0_BUCKETCNT_M		MAKEMASK(0x3FFFF, 0)
+#define VPQF_PECNT_1(_VF)			(0x00473000 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VPQF_PECNT_1_MAX_INDEX			255
+#define VPQF_PECNT_1_FLTCNT_S			0
+#define VPQF_PECNT_1_FLTCNT_M			MAKEMASK(0x3FFFF, 0)
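+/*
+ * Addressing note (editorial): the VPQF_* macros above are indexed per VF;
+ * the parameter simply scales a 4-byte stride, so for example
+ * VPQF_PE_CTL1(5) expands to (0x00474000 + 5 * 4) = 0x00474014.
+ */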
+#define GLDCB_RMPMC				0x001223C8 /* Reset Source: CORER */
+#define GLDCB_RMPMC_RSPM_S			0
+#define GLDCB_RMPMC_RSPM_M			MAKEMASK(0x3F, 0)
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_S		6
+#define GLDCB_RMPMC_MIQ_NODROP_MODE_M		MAKEMASK(0x1F, 6)
+#define GLDCB_RMPMC_RPM_DIS_S			31
+#define GLDCB_RMPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RMPMS				0x001223CC /* Reset Source: CORER */
+#define GLDCB_RMPMS_RMPM_S			0
+#define GLDCB_RMPMS_RMPM_M			MAKEMASK(0xFFFF, 0)
+#define GLDCB_RPCC				0x00122260 /* Reset Source: CORER */
+#define GLDCB_RPCC_EN_S				0
+#define GLDCB_RPCC_EN_M				BIT(0)
+#define GLDCB_RPCC_SCL_FACT_S			4
+#define GLDCB_RPCC_SCL_FACT_M			MAKEMASK(0x1F, 4)
+#define GLDCB_RPCC_THRSH_S			16
+#define GLDCB_RPCC_THRSH_M			MAKEMASK(0xFFF, 16)
+#define GLDCB_RSPMC				0x001223C4 /* Reset Source: CORER */
+#define GLDCB_RSPMC_RSPM_S			0
+#define GLDCB_RSPMC_RSPM_M			MAKEMASK(0xFF, 0)
+#define GLDCB_RSPMC_RPM_MODE_S			8
+#define GLDCB_RSPMC_RPM_MODE_M			MAKEMASK(0x3, 8)
+#define GLDCB_RSPMC_PRR_MAX_EXP_S		10
+#define GLDCB_RSPMC_PRR_MAX_EXP_M		MAKEMASK(0xF, 10)
+#define GLDCB_RSPMC_PFCTIMER_S			14
+#define GLDCB_RSPMC_PFCTIMER_M			MAKEMASK(0x3FFF, 14)
+#define GLDCB_RSPMC_RPM_DIS_S			31
+#define GLDCB_RSPMC_RPM_DIS_M			BIT(31)
+#define GLDCB_RSPMS				0x001223C0 /* Reset Source: CORER */
+#define GLDCB_RSPMS_RSPM_S			0
+#define GLDCB_RSPMS_RSPM_M			MAKEMASK(0x3FFFF, 0)
+#define GLDCB_RTCTI				0x001223D0 /* Reset Source: CORER */
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_S		0
+#define GLDCB_RTCTI_PFCTIMEOUT_TC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLDCB_RTCTQ(_i)				(0x001222C0 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTQ_MAX_INDEX			31
+#define GLDCB_RTCTQ_RXQNUM_S			0
+#define GLDCB_RTCTQ_RXQNUM_M			MAKEMASK(0x7FF, 0)
+#define GLDCB_RTCTQ_IS_PF_Q_S			16
+#define GLDCB_RTCTQ_IS_PF_Q_M			BIT(16)
+#define GLDCB_RTCTS(_i)				(0x00122340 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLDCB_RTCTS_MAX_INDEX			31
+#define GLDCB_RTCTS_PFCTIMER_S			0
+#define GLDCB_RTCTS_PFCTIMER_M			MAKEMASK(0x3FFF, 0)
+#define GLRCB_CFG_COTF_CNT(_i)			(0x001223D4 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_CNT_MAX_INDEX		7
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_S	0
+#define GLRCB_CFG_COTF_CNT_MRKR_COTF_CNT_M	MAKEMASK(0x3F, 0)
+#define GLRCB_CFG_COTF_ST			0x001223F4 /* Reset Source: CORER */
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_S	0
+#define GLRCB_CFG_COTF_ST_MRKR_COTF_ST_M	MAKEMASK(0xFF, 0)
+#define GLRPRS_PMCFG_DHW(_i)			(0x00200388 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DHW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DHW_DHW_S			0
+#define GLRPRS_PMCFG_DHW_DHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DLW(_i)			(0x002003C8 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DLW_MAX_INDEX		15
+#define GLRPRS_PMCFG_DLW_DLW_S			0
+#define GLRPRS_PMCFG_DLW_DLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_DPS(_i)			(0x00200308 + ((_i) * 4)) /* _i=0...15 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_DPS_MAX_INDEX		15
+#define GLRPRS_PMCFG_DPS_DPS_S			0
+#define GLRPRS_PMCFG_DPS_DPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SHW(_i)			(0x00200448 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SHW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SHW_SHW_S			0
+#define GLRPRS_PMCFG_SHW_SHW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SLW(_i)			(0x00200468 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SLW_MAX_INDEX		7
+#define GLRPRS_PMCFG_SLW_SLW_S			0
+#define GLRPRS_PMCFG_SLW_SLW_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_SPS(_i)			(0x00200408 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_SPS_MAX_INDEX		7
+#define GLRPRS_PMCFG_SPS_SPS_S			0
+#define GLRPRS_PMCFG_SPS_SPS_M			MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TC_CFG(_i)			(0x00200488 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_S		0
+#define GLRPRS_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_S		16
+#define GLRPRS_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define GLRPRS_PMCFG_TCHW(_i)			(0x00200588 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCHW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCHW_TCHW_S		0
+#define GLRPRS_PMCFG_TCHW_TCHW_M		MAKEMASK(0xFFFFF, 0)
+#define GLRPRS_PMCFG_TCLW(_i)			(0x00200608 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLRPRS_PMCFG_TCLW_MAX_INDEX		31
+#define GLRPRS_PMCFG_TCLW_TCLW_S		0
+#define GLRPRS_PMCFG_TCLW_TCLW_M		MAKEMASK(0xFFFFF, 0)
+#define GLSWT_PMCFG_TC_CFG(_i)			(0x00204900 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSWT_PMCFG_TC_CFG_MAX_INDEX		31
+#define GLSWT_PMCFG_TC_CFG_D_POOL_S		0
+#define GLSWT_PMCFG_TC_CFG_D_POOL_M		MAKEMASK(0xF, 0)
+#define GLSWT_PMCFG_TC_CFG_S_POOL_S		16
+#define GLSWT_PMCFG_TC_CFG_S_POOL_M		MAKEMASK(0x7, 16)
+#define PRTDCB_RLANPMS				0x00122280 /* Reset Source: CORER */
+#define PRTDCB_RLANPMS_LANRPPM_S		0
+#define PRTDCB_RLANPMS_LANRPPM_M		MAKEMASK(0x3FFFF, 0)
+#define PRTDCB_RPPMC				0x00122240 /* Reset Source: CORER */
+#define PRTDCB_RPPMC_LANRPPM_S			0
+#define PRTDCB_RPPMC_LANRPPM_M			MAKEMASK(0xFF, 0)
+#define PRTDCB_RPPMC_RDMARPPM_S			8
+#define PRTDCB_RPPMC_RDMARPPM_M			MAKEMASK(0xFF, 8)
+#define PRTDCB_RRDMAPMS				0x00122120 /* Reset Source: CORER */
+#define PRTDCB_RRDMAPMS_RDMARPPM_S		0
+#define PRTDCB_RRDMAPMS_RDMARPPM_M		MAKEMASK(0x3FFFF, 0)
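+/*
+ * Every field in this file is described by a shift/mask pair: the _S
+ * macro gives the bit offset and the _M macro the mask already shifted
+ * into place (MAKEMASK(m, s) expands to (m) << (s)).  A minimal
+ * extraction sketch, assuming the rd32() register-read helper from
+ * ice_osdep.h:
+ *
+ *	u32 val = rd32(hw, PRTDCB_RPPMC);
+ *	u32 lan = (val & PRTDCB_RPPMC_LANRPPM_M) >> PRTDCB_RPPMC_LANRPPM_S;
+ */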
+#define GL_STAT_SWR_BPCH(_i)			(0x00347804 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCH_MAX_INDEX		127
+#define GL_STAT_SWR_BPCH_VLBPCH_S		0
+#define GL_STAT_SWR_BPCH_VLBPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_BPCL(_i)			(0x00347800 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_BPCL_MAX_INDEX		127
+#define GL_STAT_SWR_BPCL_VLBPCL_S		0
+#define GL_STAT_SWR_BPCL_VLBPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GORCH(_i)			(0x00342004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCH_MAX_INDEX		127
+#define GL_STAT_SWR_GORCH_VLBCH_S		0
+#define GL_STAT_SWR_GORCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GORCL(_i)			(0x00342000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GORCL_MAX_INDEX		127
+#define GL_STAT_SWR_GORCL_VLBCL_S		0
+#define GL_STAT_SWR_GORCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_GOTCH(_i)			(0x00304004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCH_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCH_VLBCH_S		0
+#define GL_STAT_SWR_GOTCH_VLBCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_GOTCL(_i)			(0x00304000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_GOTCL_MAX_INDEX		127
+#define GL_STAT_SWR_GOTCL_VLBCL_S		0
+#define GL_STAT_SWR_GOTCL_VLBCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_MPCH(_i)			(0x00347404 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCH_MAX_INDEX		127
+#define GL_STAT_SWR_MPCH_VLMPCH_S		0
+#define GL_STAT_SWR_MPCH_VLMPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_MPCL(_i)			(0x00347400 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_MPCL_MAX_INDEX		127
+#define GL_STAT_SWR_MPCL_VLMPCL_S		0
+#define GL_STAT_SWR_MPCL_VLMPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_STAT_SWR_UPCH(_i)			(0x00347004 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCH_MAX_INDEX		127
+#define GL_STAT_SWR_UPCH_VLUPCH_S		0
+#define GL_STAT_SWR_UPCH_VLUPCH_M		MAKEMASK(0xFF, 0)
+#define GL_STAT_SWR_UPCL(_i)			(0x00347000 + ((_i) * 8)) /* _i=0...127 */ /* Reset Source: CORER */
+#define GL_STAT_SWR_UPCL_MAX_INDEX		127
+#define GL_STAT_SWR_UPCL_VLUPCL_S		0
+#define GL_STAT_SWR_UPCL_VLUPCL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_AORCL(_i)				(0x003812C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_AORCL_MAX_INDEX			7
+#define GLPRT_AORCL_AORCL_S			0
+#define GLPRT_AORCL_AORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPRCH(_i)				(0x00381384 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCH_MAX_INDEX			7
+#define GLPRT_BPRCH_UPRCH_S			0
+#define GLPRT_BPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPRCL(_i)				(0x00381380 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPRCL_MAX_INDEX			7
+#define GLPRT_BPRCL_UPRCH_S			0
+#define GLPRT_BPRCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_BPTCH(_i)				(0x00381244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCH_MAX_INDEX			7
+#define GLPRT_BPTCH_UPRCH_S			0
+#define GLPRT_BPTCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_BPTCL(_i)				(0x00381240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_BPTCL_MAX_INDEX			7
+#define GLPRT_BPTCL_UPRCH_S			0
+#define GLPRT_BPTCL_UPRCH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS(_i)			(0x00380100 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_MAX_INDEX			7
+#define GLPRT_CRCERRS_CRCERRS_S			0
+#define GLPRT_CRCERRS_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_CRCERRS_H(_i)			(0x00380104 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_CRCERRS_H_MAX_INDEX		7
+#define GLPRT_CRCERRS_H_CRCERRS_S		0
+#define GLPRT_CRCERRS_H_CRCERRS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GORCH(_i)				(0x00380004 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCH_MAX_INDEX			7
+#define GLPRT_GORCH_GORCH_S			0
+#define GLPRT_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GORCL(_i)				(0x00380000 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GORCL_MAX_INDEX			7
+#define GLPRT_GORCL_GORCL_S			0
+#define GLPRT_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_GOTCH(_i)				(0x00380B44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCH_MAX_INDEX			7
+#define GLPRT_GOTCH_GOTCH_S			0
+#define GLPRT_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_GOTCL(_i)				(0x00380B40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_GOTCL_MAX_INDEX			7
+#define GLPRT_GOTCL_GOTCL_S			0
+#define GLPRT_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC(_i)			(0x003801C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_MAX_INDEX			7
+#define GLPRT_ILLERRC_ILLERRC_S			0
+#define GLPRT_ILLERRC_ILLERRC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ILLERRC_H(_i)			(0x003801C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ILLERRC_H_MAX_INDEX		7
+#define GLPRT_ILLERRC_H_ILLERRC_S		0
+#define GLPRT_ILLERRC_H_ILLERRC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC(_i)			(0x003802C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFRXC_H(_i)			(0x003802C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_S		0
+#define GLPRT_LXOFFRXC_H_LXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC(_i)			(0x00381180 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXOFFTXC_H(_i)			(0x00381184 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_S		0
+#define GLPRT_LXOFFTXC_H_LXOFFTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC(_i)			(0x00380280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_MAX_INDEX			7
+#define GLPRT_LXONRXC_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONRXC_H(_i)			(0x00380284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONRXC_H_MAX_INDEX		7
+#define GLPRT_LXONRXC_H_LXONRXCNT_S		0
+#define GLPRT_LXONRXC_H_LXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC(_i)			(0x00381140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_MAX_INDEX			7
+#define GLPRT_LXONTXC_LXONTXC_S			0
+#define GLPRT_LXONTXC_LXONTXC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_LXONTXC_H(_i)			(0x00381144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_LXONTXC_H_MAX_INDEX		7
+#define GLPRT_LXONTXC_H_LXONTXC_S		0
+#define GLPRT_LXONTXC_H_LXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC(_i)				(0x00380040 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_MAX_INDEX			7
+#define GLPRT_MLFC_MLFC_S			0
+#define GLPRT_MLFC_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MLFC_H(_i)			(0x00380044 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MLFC_H_MAX_INDEX			7
+#define GLPRT_MLFC_H_MLFC_S			0
+#define GLPRT_MLFC_H_MLFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPRCH(_i)				(0x00381344 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCH_MAX_INDEX			7
+#define GLPRT_MPRCH_MPRCH_S			0
+#define GLPRT_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPRCL(_i)				(0x00381340 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPRCL_MAX_INDEX			7
+#define GLPRT_MPRCL_MPRCL_S			0
+#define GLPRT_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MPTCH(_i)				(0x00381204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCH_MAX_INDEX			7
+#define GLPRT_MPTCH_MPTCH_S			0
+#define GLPRT_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_MPTCL(_i)				(0x00381200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MPTCL_MAX_INDEX			7
+#define GLPRT_MPTCL_MPTCL_S			0
+#define GLPRT_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC(_i)				(0x00380080 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_MAX_INDEX			7
+#define GLPRT_MRFC_MRFC_S			0
+#define GLPRT_MRFC_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_MRFC_H(_i)			(0x00380084 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_MRFC_H_MAX_INDEX			7
+#define GLPRT_MRFC_H_MRFC_S			0
+#define GLPRT_MRFC_H_MRFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1023H(_i)			(0x00380A04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023H_MAX_INDEX		7
+#define GLPRT_PRC1023H_PRC1023H_S		0
+#define GLPRT_PRC1023H_PRC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1023L(_i)			(0x00380A00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1023L_MAX_INDEX		7
+#define GLPRT_PRC1023L_PRC1023L_S		0
+#define GLPRT_PRC1023L_PRC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC127H(_i)			(0x00380944 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127H_MAX_INDEX			7
+#define GLPRT_PRC127H_PRC127H_S			0
+#define GLPRT_PRC127H_PRC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC127L(_i)			(0x00380940 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC127L_MAX_INDEX			7
+#define GLPRT_PRC127L_PRC127L_S			0
+#define GLPRT_PRC127L_PRC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC1522H(_i)			(0x00380A44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522H_MAX_INDEX		7
+#define GLPRT_PRC1522H_PRC1522H_S		0
+#define GLPRT_PRC1522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC1522L(_i)			(0x00380A40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC1522L_MAX_INDEX		7
+#define GLPRT_PRC1522L_PRC1522L_S		0
+#define GLPRT_PRC1522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC255H(_i)			(0x00380984 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255H_MAX_INDEX			7
+#define GLPRT_PRC255H_PRTPRC255H_S		0
+#define GLPRT_PRC255H_PRTPRC255H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC255L(_i)			(0x00380980 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC255L_MAX_INDEX			7
+#define GLPRT_PRC255L_PRC255L_S			0
+#define GLPRT_PRC255L_PRC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC511H(_i)			(0x003809C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511H_MAX_INDEX			7
+#define GLPRT_PRC511H_PRC511H_S			0
+#define GLPRT_PRC511H_PRC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC511L(_i)			(0x003809C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC511L_MAX_INDEX			7
+#define GLPRT_PRC511L_PRC511L_S			0
+#define GLPRT_PRC511L_PRC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC64H(_i)			(0x00380904 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64H_MAX_INDEX			7
+#define GLPRT_PRC64H_PRC64H_S			0
+#define GLPRT_PRC64H_PRC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PRC64L(_i)			(0x00380900 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC64L_MAX_INDEX			7
+#define GLPRT_PRC64L_PRC64L_S			0
+#define GLPRT_PRC64L_PRC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PRC9522H(_i)			(0x00380A84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522H_MAX_INDEX		7
+#define GLPRT_PRC9522H_PRC1522H_S		0
+#define GLPRT_PRC9522H_PRC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PRC9522L(_i)			(0x00380A80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PRC9522L_MAX_INDEX		7
+#define GLPRT_PRC9522L_PRC1522L_S		0
+#define GLPRT_PRC9522L_PRC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1023H(_i)			(0x00380C84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023H_MAX_INDEX		7
+#define GLPRT_PTC1023H_PTC1023H_S		0
+#define GLPRT_PTC1023H_PTC1023H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1023L(_i)			(0x00380C80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1023L_MAX_INDEX		7
+#define GLPRT_PTC1023L_PTC1023L_S		0
+#define GLPRT_PTC1023L_PTC1023L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC127H(_i)			(0x00380BC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127H_MAX_INDEX			7
+#define GLPRT_PTC127H_PTC127H_S			0
+#define GLPRT_PTC127H_PTC127H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC127L(_i)			(0x00380BC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC127L_MAX_INDEX			7
+#define GLPRT_PTC127L_PTC127L_S			0
+#define GLPRT_PTC127L_PTC127L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC1522H(_i)			(0x00380CC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522H_MAX_INDEX		7
+#define GLPRT_PTC1522H_PTC1522H_S		0
+#define GLPRT_PTC1522H_PTC1522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC1522L(_i)			(0x00380CC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC1522L_MAX_INDEX		7
+#define GLPRT_PTC1522L_PTC1522L_S		0
+#define GLPRT_PTC1522L_PTC1522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC255H(_i)			(0x00380C04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255H_MAX_INDEX			7
+#define GLPRT_PTC255H_PTC255H_S			0
+#define GLPRT_PTC255H_PTC255H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC255L(_i)			(0x00380C00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC255L_MAX_INDEX			7
+#define GLPRT_PTC255L_PTC255L_S			0
+#define GLPRT_PTC255L_PTC255L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC511H(_i)			(0x00380C44 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511H_MAX_INDEX			7
+#define GLPRT_PTC511H_PTC511H_S			0
+#define GLPRT_PTC511H_PTC511H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC511L(_i)			(0x00380C40 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC511L_MAX_INDEX			7
+#define GLPRT_PTC511L_PTC511L_S			0
+#define GLPRT_PTC511L_PTC511L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC64H(_i)			(0x00380B84 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64H_MAX_INDEX			7
+#define GLPRT_PTC64H_PTC64H_S			0
+#define GLPRT_PTC64H_PTC64H_M			MAKEMASK(0xFF, 0)
+#define GLPRT_PTC64L(_i)			(0x00380B80 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC64L_MAX_INDEX			7
+#define GLPRT_PTC64L_PTC64L_S			0
+#define GLPRT_PTC64L_PTC64L_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PTC9522H(_i)			(0x00380D04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522H_MAX_INDEX		7
+#define GLPRT_PTC9522H_PTC9522H_S		0
+#define GLPRT_PTC9522H_PTC9522H_M		MAKEMASK(0xFF, 0)
+#define GLPRT_PTC9522L(_i)			(0x00380D00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PTC9522L_MAX_INDEX		7
+#define GLPRT_PTC9522L_PTC9522L_S		0
+#define GLPRT_PTC9522L_PTC9522L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC(_i, _j)			(0x00380500 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFRXC_H(_i, _j)		(0x00380504 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFRXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_S		0
+#define GLPRT_PXOFFRXC_H_PRPXOFFRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC(_i, _j)			(0x00380F40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXOFFTXC_H(_i, _j)		(0x00380F44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXOFFTXC_H_MAX_INDEX		7
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_S		0
+#define GLPRT_PXOFFTXC_H_PRPXOFFTXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC(_i, _j)			(0x00380300 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_MAX_INDEX			7
+#define GLPRT_PXONRXC_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONRXC_H(_i, _j)			(0x00380304 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONRXC_H_MAX_INDEX		7
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_S		0
+#define GLPRT_PXONRXC_H_PRPXONRXCNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC(_i, _j)			(0x00380D40 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_MAX_INDEX			7
+#define GLPRT_PXONTXC_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_PXONTXC_H(_i, _j)			(0x00380D44 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_PXONTXC_H_MAX_INDEX		7
+#define GLPRT_PXONTXC_H_PRPXONTXC_S		0
+#define GLPRT_PXONTXC_H_PRPXONTXC_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC(_i)				(0x00380AC0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_MAX_INDEX			7
+#define GLPRT_RFC_RFC_S				0
+#define GLPRT_RFC_RFC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RFC_H(_i)				(0x00380AC4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RFC_H_MAX_INDEX			7
+#define GLPRT_RFC_H_RFC_S			0
+#define GLPRT_RFC_H_RFC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC(_i)				(0x00380B00 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_MAX_INDEX			7
+#define GLPRT_RJC_RJC_S				0
+#define GLPRT_RJC_RJC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RJC_H(_i)				(0x00380B04 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RJC_H_MAX_INDEX			7
+#define GLPRT_RJC_H_RJC_S			0
+#define GLPRT_RJC_H_RJC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC(_i)				(0x00380140 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_MAX_INDEX			7
+#define GLPRT_RLEC_RLEC_S			0
+#define GLPRT_RLEC_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RLEC_H(_i)			(0x00380144 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RLEC_H_MAX_INDEX			7
+#define GLPRT_RLEC_H_RLEC_S			0
+#define GLPRT_RLEC_H_RLEC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC(_i)				(0x00380240 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_MAX_INDEX			7
+#define GLPRT_ROC_ROC_S				0
+#define GLPRT_ROC_ROC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_ROC_H(_i)				(0x00380244 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_ROC_H_MAX_INDEX			7
+#define GLPRT_ROC_H_ROC_S			0
+#define GLPRT_ROC_H_ROC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC(_i)				(0x00380200 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_MAX_INDEX			7
+#define GLPRT_RUC_RUC_S				0
+#define GLPRT_RUC_RUC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RUC_H(_i)				(0x00380204 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RUC_H_MAX_INDEX			7
+#define GLPRT_RUC_H_RUC_S			0
+#define GLPRT_RUC_H_RUC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT(_i, _j)		(0x00380700 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_RXON2OFFCNT_H(_i, _j)		(0x00380704 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...7 */ /* Reset Source: CORER */
+#define GLPRT_RXON2OFFCNT_H_MAX_INDEX		7
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_S	0
+#define GLPRT_RXON2OFFCNT_H_PRRXON2OFFCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_STDC(_i)				(0x00340000 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_STDC_MAX_INDEX			7
+#define GLPRT_STDC_STDC_S			0
+#define GLPRT_STDC_STDC_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD(_i)				(0x00381280 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_MAX_INDEX			7
+#define GLPRT_TDOLD_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_TDOLD_H(_i)			(0x00381284 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_TDOLD_H_MAX_INDEX			7
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_S		0
+#define GLPRT_TDOLD_H_GLPRT_TDOLD_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPRCH(_i)				(0x00381304 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCH_MAX_INDEX			7
+#define GLPRT_UPRCH_UPRCH_S			0
+#define GLPRT_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPRCL(_i)				(0x00381300 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPRCL_MAX_INDEX			7
+#define GLPRT_UPRCL_UPRCL_S			0
+#define GLPRT_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPRT_UPTCH(_i)				(0x003811C4 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCH_MAX_INDEX			7
+#define GLPRT_UPTCH_UPTCH_S			0
+#define GLPRT_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLPRT_UPTCL(_i)				(0x003811C0 + ((_i) * 8)) /* _i=0...7 */ /* Reset Source: CORER */
+#define GLPRT_UPTCL_MAX_INDEX			7
+#define GLPRT_UPTCL_VUPTCH_S			0
+#define GLPRT_UPTCL_VUPTCH_M			MAKEMASK(0xFFFFFFFF, 0)
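+/*
+ * The *CL/*CH counter pairs above (GLPRT, GLSW, GLV byte and packet
+ * counts) are 40-bit values split into a 32-bit low (*CL) register and
+ * an 8-bit high (*CH) register.  A sketch of assembling one 64-bit
+ * reading, assuming the rd32() helper from ice_osdep.h; port is the
+ * 0...7 port index:
+ *
+ *	u64 gorc = (u64)rd32(hw, GLPRT_GORCL(port));
+ *	gorc |= ((u64)(rd32(hw, GLPRT_GORCH(port)) &
+ *		       GLPRT_GORCH_GORCH_M)) << 32;
+ */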
+#define GLSTAT_ACL_CNT_0_H(_i)			(0x00388004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_0_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_0_L(_i)			(0x00388000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_0_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_0_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_1_H(_i)			(0x00389004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_1_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_1_L(_i)			(0x00389000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_1_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_1_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_2_H(_i)			(0x0038A004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_2_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_2_L(_i)			(0x0038A000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_2_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_2_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_ACL_CNT_3_H(_i)			(0x0038B004 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_H_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_S		0
+#define GLSTAT_ACL_CNT_3_H_CNT_MSB_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_ACL_CNT_3_L(_i)			(0x0038B000 + ((_i) * 8)) /* _i=0...511 */ /* Reset Source: CORER */
+#define GLSTAT_ACL_CNT_3_L_MAX_INDEX		511
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_S		0
+#define GLSTAT_ACL_CNT_3_L_CNT_LSB_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT0H(_i)			(0x003A0004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT0H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT0L(_i)			(0x003A0000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT0L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT0L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSTAT_FD_CNT1H(_i)			(0x003A8004 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1H_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_S		0
+#define GLSTAT_FD_CNT1H_FD0_CNT_H_M		MAKEMASK(0xFF, 0)
+#define GLSTAT_FD_CNT1L(_i)			(0x003A8000 + ((_i) * 8)) /* _i=0...4095 */ /* Reset Source: CORER */
+#define GLSTAT_FD_CNT1L_MAX_INDEX		4095
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_S		0
+#define GLSTAT_FD_CNT1L_FD0_CNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPRCH(_i)				(0x00346204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCH_MAX_INDEX			31
+#define GLSW_BPRCH_BPRCH_S			0
+#define GLSW_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPRCL(_i)				(0x00346200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPRCL_MAX_INDEX			31
+#define GLSW_BPRCL_BPRCL_S			0
+#define GLSW_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_BPTCH(_i)				(0x00310204 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCH_MAX_INDEX			31
+#define GLSW_BPTCH_BPTCH_S			0
+#define GLSW_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_BPTCL(_i)				(0x00310200 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_BPTCL_MAX_INDEX			31
+#define GLSW_BPTCL_BPTCL_S			0
+#define GLSW_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GORCH(_i)				(0x00341004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCH_MAX_INDEX			31
+#define GLSW_GORCH_GORCH_S			0
+#define GLSW_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GORCL(_i)				(0x00341000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GORCL_MAX_INDEX			31
+#define GLSW_GORCL_GORCL_S			0
+#define GLSW_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_GOTCH(_i)				(0x00302004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCH_MAX_INDEX			31
+#define GLSW_GOTCH_GOTCH_S			0
+#define GLSW_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_GOTCL(_i)				(0x00302000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_GOTCL_MAX_INDEX			31
+#define GLSW_GOTCL_GOTCL_S			0
+#define GLSW_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPRCH(_i)				(0x00346104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCH_MAX_INDEX			31
+#define GLSW_MPRCH_MPRCH_S			0
+#define GLSW_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPRCL(_i)				(0x00346100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPRCL_MAX_INDEX			31
+#define GLSW_MPRCL_MPRCL_S			0
+#define GLSW_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_MPTCH(_i)				(0x00310104 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCH_MAX_INDEX			31
+#define GLSW_MPTCH_MPTCH_S			0
+#define GLSW_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_MPTCL(_i)				(0x00310100 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_MPTCL_MAX_INDEX			31
+#define GLSW_MPTCL_MPTCL_S			0
+#define GLSW_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPRCH(_i)				(0x00346004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCH_MAX_INDEX			31
+#define GLSW_UPRCH_UPRCH_S			0
+#define GLSW_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPRCL(_i)				(0x00346000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPRCL_MAX_INDEX			31
+#define GLSW_UPRCL_UPRCL_S			0
+#define GLSW_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSW_UPTCH(_i)				(0x00310004 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCH_MAX_INDEX			31
+#define GLSW_UPTCH_UPTCH_S			0
+#define GLSW_UPTCH_UPTCH_M			MAKEMASK(0xFF, 0)
+#define GLSW_UPTCL(_i)				(0x00310000 + ((_i) * 8)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GLSW_UPTCL_MAX_INDEX			31
+#define GLSW_UPTCL_UPTCL_S			0
+#define GLSW_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWID_RUPP(_i)				(0x00345000 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_RUPP_MAX_INDEX			255
+#define GLSWID_RUPP_RUPP_S			0
+#define GLSWID_RUPP_RUPP_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPRCH(_i)				(0x003B6004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCH_MAX_INDEX			767
+#define GLV_BPRCH_BPRCH_S			0
+#define GLV_BPRCH_BPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPRCL(_i)				(0x003B6000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPRCL_MAX_INDEX			767
+#define GLV_BPRCL_BPRCL_S			0
+#define GLV_BPRCL_BPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_BPTCH(_i)				(0x0030E004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCH_MAX_INDEX			767
+#define GLV_BPTCH_BPTCH_S			0
+#define GLV_BPTCH_BPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_BPTCL(_i)				(0x0030E000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_BPTCL_MAX_INDEX			767
+#define GLV_BPTCL_BPTCL_S			0
+#define GLV_BPTCL_BPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GORCH(_i)				(0x003B0004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCH_MAX_INDEX			767
+#define GLV_GORCH_GORCH_S			0
+#define GLV_GORCH_GORCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GORCL(_i)				(0x003B0000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GORCL_MAX_INDEX			767
+#define GLV_GORCL_GORCL_S			0
+#define GLV_GORCL_GORCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_GOTCH(_i)				(0x00300004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCH_MAX_INDEX			767
+#define GLV_GOTCH_GOTCH_S			0
+#define GLV_GOTCH_GOTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_GOTCL(_i)				(0x00300000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_GOTCL_MAX_INDEX			767
+#define GLV_GOTCL_GOTCL_S			0
+#define GLV_GOTCL_GOTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPRCH(_i)				(0x003B4004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCH_MAX_INDEX			767
+#define GLV_MPRCH_MPRCH_S			0
+#define GLV_MPRCH_MPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPRCL(_i)				(0x003B4000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPRCL_MAX_INDEX			767
+#define GLV_MPRCL_MPRCL_S			0
+#define GLV_MPRCL_MPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_MPTCH(_i)				(0x0030C004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCH_MAX_INDEX			767
+#define GLV_MPTCH_MPTCH_S			0
+#define GLV_MPTCH_MPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_MPTCL(_i)				(0x0030C000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_MPTCL_MAX_INDEX			767
+#define GLV_MPTCL_MPTCL_S			0
+#define GLV_MPTCL_MPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_RDPC(_i)				(0x00294C04 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_RDPC_MAX_INDEX			767
+#define GLV_RDPC_RDPC_S				0
+#define GLV_RDPC_RDPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_REPC(_i)				(0x00295804 + ((_i) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_REPC_MAX_INDEX			767
+#define GLV_REPC_NO_DESC_CNT_S			0
+#define GLV_REPC_NO_DESC_CNT_M			MAKEMASK(0xFFFF, 0)
+#define GLV_REPC_ERROR_CNT_S			16
+#define GLV_REPC_ERROR_CNT_M			MAKEMASK(0xFFFF, 16)
+#define GLV_TEPC(_VSI)				(0x00312000 + ((_VSI) * 4)) /* _VSI=0...767 */ /* Reset Source: CORER */

+#define GLV_TEPC_MAX_INDEX			767
+#define GLV_TEPC_TEPC_S				0
+#define GLV_TEPC_TEPC_M				MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPRCH(_i)				(0x003B2004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCH_MAX_INDEX			767
+#define GLV_UPRCH_UPRCH_S			0
+#define GLV_UPRCH_UPRCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPRCL(_i)				(0x003B2000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPRCL_MAX_INDEX			767
+#define GLV_UPRCL_UPRCL_S			0
+#define GLV_UPRCL_UPRCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLV_UPTCH(_i)				(0x0030A004 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCH_MAX_INDEX			767
+#define GLV_UPTCH_GLVUPTCH_S			0
+#define GLV_UPTCH_GLVUPTCH_M			MAKEMASK(0xFF, 0)
+#define GLV_UPTCL(_i)				(0x0030A000 + ((_i) * 8)) /* _i=0...767 */ /* Reset Source: CORER */
+#define GLV_UPTCL_MAX_INDEX			767
+#define GLV_UPTCL_UPTCL_S			0
+#define GLV_UPTCL_UPTCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RBCH(_i, _j)			(0x00343004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCH_MAX_INDEX			7
+#define GLVEBUP_RBCH_UPBCH_S			0
+#define GLVEBUP_RBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RBCL(_i, _j)			(0x00343000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RBCL_MAX_INDEX			7
+#define GLVEBUP_RBCL_UPBCL_S			0
+#define GLVEBUP_RBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_RPCH(_i, _j)			(0x00344004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCH_MAX_INDEX			7
+#define GLVEBUP_RPCH_UPPCH_S			0
+#define GLVEBUP_RPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_RPCL(_i, _j)			(0x00344000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_RPCL_MAX_INDEX			7
+#define GLVEBUP_RPCL_UPPCL_S			0
+#define GLVEBUP_RPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TBCH(_i, _j)			(0x00306004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCH_MAX_INDEX			7
+#define GLVEBUP_TBCH_UPBCH_S			0
+#define GLVEBUP_TBCH_UPBCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TBCL(_i, _j)			(0x00306000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TBCL_MAX_INDEX			7
+#define GLVEBUP_TBCL_UPBCL_S			0
+#define GLVEBUP_TBCL_UPBCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLVEBUP_TPCH(_i, _j)			(0x00308004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCH_MAX_INDEX			7
+#define GLVEBUP_TPCH_UPPCH_S			0
+#define GLVEBUP_TPCH_UPPCH_M			MAKEMASK(0xFF, 0)
+#define GLVEBUP_TPCL(_i, _j)			(0x00308000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...31 */ /* Reset Source: CORER */
+#define GLVEBUP_TPCL_MAX_INDEX			7
+#define GLVEBUP_TPCL_UPPCL_S			0
+#define GLVEBUP_TPCL_UPPCL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_LDPC				0x000AC280 /* Reset Source: CORER */
+#define PRTRPB_LDPC_CRCERRS_S			0
+#define PRTRPB_LDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTRPB_RDPC				0x000AC260 /* Reset Source: CORER */
+#define PRTRPB_RDPC_CRCERRS_S			0
+#define PRTRPB_RDPC_CRCERRS_M			MAKEMASK(0xFFFFFFFF, 0)
+#define PRTTPB_STAT_TC_BYTES_SENTL(_i)		(0x00098200 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define PRTTPB_STAT_TC_BYTES_SENTL_MAX_INDEX	63
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_S	0
+#define PRTTPB_STAT_TC_BYTES_SENTL_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_PKT_SENT(_i)		(0x00099470 + ((_i) * 4)) /* _i=0...7 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_PKT_SENT_MAX_INDEX	7
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_S	0
+#define TPB_PRTTPB_STAT_PKT_SENT_PKTCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT(_i)	(0x00099094 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_MAX_INDEX 63
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_S	0
+#define TPB_PRTTPB_STAT_TC_BYTES_SENT_TCCNT_M	MAKEMASK(0xFFFFFFFF, 0)
+#define EMP_SWT_PRUNIND				0x00204020 /* Reset Source: CORER */
+#define EMP_SWT_PRUNIND_OPCODE_S		0
+#define EMP_SWT_PRUNIND_OPCODE_M		MAKEMASK(0xF, 0)
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_S	4
+#define EMP_SWT_PRUNIND_LIST_INDEX_NUM_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_PRUNIND_VSI_NUM_S		16
+#define EMP_SWT_PRUNIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_PRUNIND_BIT_VALUE_S		31
+#define EMP_SWT_PRUNIND_BIT_VALUE_M		BIT(31)
+#define EMP_SWT_REPIND				0x0020401c /* Reset Source: CORER */
+#define EMP_SWT_REPIND_OPCODE_S			0
+#define EMP_SWT_REPIND_OPCODE_M			MAKEMASK(0xF, 0)
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_S	4
+#define EMP_SWT_REPIND_LIST_INDEX_NUMBER_M	MAKEMASK(0x3FF, 4)
+#define EMP_SWT_REPIND_VSI_NUM_S		16
+#define EMP_SWT_REPIND_VSI_NUM_M		MAKEMASK(0x3FF, 16)
+#define EMP_SWT_REPIND_BIT_VALUE_S		31
+#define EMP_SWT_REPIND_BIT_VALUE_M		BIT(31)
+#define GL_OVERRIDEC				0x002040a4 /* Reset Source: CORER */
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_S	0
+#define GL_OVERRIDEC_OVERRIDE_ATTEMPTC_M	MAKEMASK(0xFFFF, 0)
+#define GL_OVERRIDEC_LAST_VSI_S			16
+#define GL_OVERRIDEC_LAST_VSI_M			MAKEMASK(0x3FF, 16)
+#define GL_PLG_AVG_CALC_CFG			0x0020A5AC /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_S		0
+#define GL_PLG_AVG_CALC_CFG_CYCLE_LEN_M		MAKEMASK(0x7FFFFFFF, 0)
+#define GL_PLG_AVG_CALC_CFG_MODE_S		31
+#define GL_PLG_AVG_CALC_CFG_MODE_M		BIT(31)
+#define GL_PLG_AVG_CALC_ST			0x0020A5B0 /* Reset Source: CORER */
+#define GL_PLG_AVG_CALC_ST_IN_DATA_S		0
+#define GL_PLG_AVG_CALC_ST_IN_DATA_M		MAKEMASK(0x7FFF, 0)
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_S		16
+#define GL_PLG_AVG_CALC_ST_OUT_DATA_M		MAKEMASK(0x7FFF, 16)
+#define GL_PLG_AVG_CALC_ST_VALID_S		31
+#define GL_PLG_AVG_CALC_ST_VALID_M		BIT(31)
+#define GL_PRE_CFG_CMD				0x00214090 /* Reset Source: CORER */
+#define GL_PRE_CFG_CMD_ADDR_S			0
+#define GL_PRE_CFG_CMD_ADDR_M			MAKEMASK(0x1FFF, 0)
+#define GL_PRE_CFG_CMD_TBLIDX_S			16
+#define GL_PRE_CFG_CMD_TBLIDX_M			MAKEMASK(0x7, 16)
+#define GL_PRE_CFG_CMD_CMD_S			29
+#define GL_PRE_CFG_CMD_CMD_M			BIT(29)
+#define GL_PRE_CFG_CMD_DONE_S			31
+#define GL_PRE_CFG_CMD_DONE_M			BIT(31)
+#define GL_PRE_CFG_DATA(_i)			(0x00214074 + ((_i) * 4)) /* _i=0...6 */ /* Reset Source: CORER */
+#define GL_PRE_CFG_DATA_MAX_INDEX		6
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_S	0
+#define GL_PRE_CFG_DATA_GL_PRE_RCP_DATA_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_FUNCFILT				0x001D2698 /* Reset Source: CORER */
+#define GL_SWT_FUNCFILT_FUNCFILT_S		0
+#define GL_SWT_FUNCFILT_FUNCFILT_M		BIT(0)
+#define GL_SWT_FW_STS(_i)			(0x00216000 + ((_i) * 4)) /* _i=0...5 */ /* Reset Source: CORER */
+#define GL_SWT_FW_STS_MAX_INDEX			5
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_S		0
+#define GL_SWT_FW_STS_GL_SWT_FW_STS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_LAT_DOUBLE			0x00204004 /* Reset Source: CORER */
+#define GL_SWT_LAT_DOUBLE_BASE_S		0
+#define GL_SWT_LAT_DOUBLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_DOUBLE_SIZE_S		16
+#define GL_SWT_LAT_DOUBLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_QUAD				0x00204008 /* Reset Source: CORER */
+#define GL_SWT_LAT_QUAD_BASE_S			0
+#define GL_SWT_LAT_QUAD_BASE_M			MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_QUAD_SIZE_S			16
+#define GL_SWT_LAT_QUAD_SIZE_M			MAKEMASK(0x7FF, 16)
+#define GL_SWT_LAT_SINGLE			0x00204000 /* Reset Source: CORER */
+#define GL_SWT_LAT_SINGLE_BASE_S		0
+#define GL_SWT_LAT_SINGLE_BASE_M		MAKEMASK(0x7FF, 0)
+#define GL_SWT_LAT_SINGLE_SIZE_S		16
+#define GL_SWT_LAT_SINGLE_SIZE_M		MAKEMASK(0x7FF, 16)
+#define GL_SWT_MD_PRI				0x002040ac /* Reset Source: CORER */
+#define GL_SWT_MD_PRI_VSI_PRI_S			0
+#define GL_SWT_MD_PRI_VSI_PRI_M			MAKEMASK(0x7, 0)
+#define GL_SWT_MD_PRI_LB_PRI_S			4
+#define GL_SWT_MD_PRI_LB_PRI_M			MAKEMASK(0x7, 4)
+#define GL_SWT_MD_PRI_LAN_EN_PRI_S		8
+#define GL_SWT_MD_PRI_LAN_EN_PRI_M		MAKEMASK(0x7, 8)
+#define GL_SWT_MD_PRI_QH_PRI_S			12
+#define GL_SWT_MD_PRI_QH_PRI_M			MAKEMASK(0x7, 12)
+#define GL_SWT_MD_PRI_QL_PRI_S			16
+#define GL_SWT_MD_PRI_QL_PRI_M			MAKEMASK(0x7, 16)
+#define GL_SWT_MIRTARVSI(_i)			(0x00204500 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: CORER */
+#define GL_SWT_MIRTARVSI_MAX_INDEX		63
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_S		0
+#define GL_SWT_MIRTARVSI_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_S		10
+#define GL_SWT_MIRTARVSI_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define GL_SWT_MIRTARVSI_PFNUMBER_S		12
+#define GL_SWT_MIRTARVSI_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define GL_SWT_MIRTARVSI_TARGETVSI_S		20
+#define GL_SWT_MIRTARVSI_TARGETVSI_M		MAKEMASK(0x3FF, 20)
+#define GL_SWT_MIRTARVSI_RULEENABLE_S		31
+#define GL_SWT_MIRTARVSI_RULEENABLE_M		BIT(31)
+#define GL_SWT_NOMDEF_FLGS_H			0x0021411C /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_H_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_NOMDEF_FLGS_L			0x00214118 /* Reset Source: CORER */
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_S		0
+#define GL_SWT_NOMDEF_FLGS_L_FLGS_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GL_SWT_SWIDFVIDX			0x00214114 /* Reset Source: CORER */
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_S		0
+#define GL_SWT_SWIDFVIDX_SWIDFVIDX_M		MAKEMASK(0x3F, 0)
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_S		31
+#define GL_SWT_SWIDFVIDX_PORT_TYPE_M		BIT(31)
+#define GL_VP_SWITCHID(_i)			(0x00214094 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define GL_VP_SWITCHID_MAX_INDEX		31
+#define GL_VP_SWITCHID_SWITCHID_S		0
+#define GL_VP_SWITCHID_SWITCHID_M		MAKEMASK(0xFF, 0)
+#define GLSWID_STAT_BLOCK(_i)			(0x0020A1A4 + ((_i) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define GLSWID_STAT_BLOCK_MAX_INDEX		255
+#define GLSWID_STAT_BLOCK_VEBID_S		0
+#define GLSWID_STAT_BLOCK_VEBID_M		MAKEMASK(0x1F, 0)
+#define GLSWID_STAT_BLOCK_VEBID_VALID_S		31
+#define GLSWID_STAT_BLOCK_VEBID_VALID_M		BIT(31)
+#define GLSWT_ACT_RESP_0			0x0020A5A4 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_0_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ACT_RESP_1			0x0020A5A8 /* Reset Source: CORER */
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_S	0
+#define GLSWT_ACT_RESP_1_GLSWT_ACT_RESP_M	MAKEMASK(0xFFFFFFFF, 0)
+#define GLSWT_ARB_MODE				0x0020A674 /* Reset Source: CORER */
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_S		0
+#define GLSWT_ARB_MODE_FLU_PRI_SHM_M		BIT(0)
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_S		1
+#define GLSWT_ARB_MODE_TX_RX_FWD_PRI_M		BIT(1)
+#define PRT_SBPVSI				0x00204120 /* Reset Source: CORER */
+#define PRT_SBPVSI_BAD_FRAMES_VSI_S		0
+#define PRT_SBPVSI_BAD_FRAMES_VSI_M		MAKEMASK(0x3FF, 0)
+#define PRT_SBPVSI_SBP_S			31
+#define PRT_SBPVSI_SBP_M			BIT(31)
+#define PRT_SCSTS				0x00204140 /* Reset Source: CORER */
+#define PRT_SCSTS_BSCA_S			0
+#define PRT_SCSTS_BSCA_M			BIT(0)
+#define PRT_SCSTS_BSCAP_S			1
+#define PRT_SCSTS_BSCAP_M			BIT(1)
+#define PRT_SCSTS_MSCA_S			2
+#define PRT_SCSTS_MSCA_M			BIT(2)
+#define PRT_SCSTS_MSCAP_S			3
+#define PRT_SCSTS_MSCAP_M			BIT(3)
+#define PRT_SWT_BSCCNT				0x00204160 /* Reset Source: CORER */
+#define PRT_SWT_BSCCNT_CCOUNT_S			0
+#define PRT_SWT_BSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_BSCTRH				0x00204180 /* Reset Source: CORER */
+#define PRT_SWT_BSCTRH_UTRESH_S			0
+#define PRT_SWT_BSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_MIREG				0x002042A0 /* Reset Source: CORER */
+#define PRT_SWT_MIREG_MIRRULE_S			0
+#define PRT_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIREG_MIRENA_S			7
+#define PRT_SWT_MIREG_MIRENA_M			BIT(7)
+#define PRT_SWT_MIRIG				0x00204280 /* Reset Source: CORER */
+#define PRT_SWT_MIRIG_MIRRULE_S			0
+#define PRT_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define PRT_SWT_MIRIG_MIRENA_S			7
+#define PRT_SWT_MIRIG_MIRENA_M			BIT(7)
+#define PRT_SWT_MSCCNT				0x00204100 /* Reset Source: CORER */
+#define PRT_SWT_MSCCNT_CCOUNT_S			0
+#define PRT_SWT_MSCCNT_CCOUNT_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_MSCTRH				0x002041c0 /* Reset Source: CORER */
+#define PRT_SWT_MSCTRH_UTRESH_S			0
+#define PRT_SWT_MSCTRH_UTRESH_M			MAKEMASK(0x7FFFF, 0)
+#define PRT_SWT_SCBI				0x002041e0 /* Reset Source: CORER */
+#define PRT_SWT_SCBI_BI_S			0
+#define PRT_SWT_SCBI_BI_M			MAKEMASK(0x1FFFFFF, 0)
+#define PRT_SWT_SCCRL				0x00204200 /* Reset Source: CORER */
+#define PRT_SWT_SCCRL_MDIPW_S			0
+#define PRT_SWT_SCCRL_MDIPW_M			BIT(0)
+#define PRT_SWT_SCCRL_MDICW_S			1
+#define PRT_SWT_SCCRL_MDICW_M			BIT(1)
+#define PRT_SWT_SCCRL_BDIPW_S			2
+#define PRT_SWT_SCCRL_BDIPW_M			BIT(2)
+#define PRT_SWT_SCCRL_BDICW_S			3
+#define PRT_SWT_SCCRL_BDICW_M			BIT(3)
+#define PRT_SWT_SCCRL_INTERVAL_S		8
+#define PRT_SWT_SCCRL_INTERVAL_M		MAKEMASK(0xFFFFF, 8)
+#define PRT_TCTUPR(_i)				(0x00040840 + ((_i) * 4)) /* _i=0...31 */ /* Reset Source: CORER */
+#define PRT_TCTUPR_MAX_INDEX			31
+#define PRT_TCTUPR_UP0_S			0
+#define PRT_TCTUPR_UP0_M			MAKEMASK(0x7, 0)
+#define PRT_TCTUPR_UP1_S			4
+#define PRT_TCTUPR_UP1_M			MAKEMASK(0x7, 4)
+#define PRT_TCTUPR_UP2_S			8
+#define PRT_TCTUPR_UP2_M			MAKEMASK(0x7, 8)
+#define PRT_TCTUPR_UP3_S			12
+#define PRT_TCTUPR_UP3_M			MAKEMASK(0x7, 12)
+#define PRT_TCTUPR_UP4_S			16
+#define PRT_TCTUPR_UP4_M			MAKEMASK(0x7, 16)
+#define PRT_TCTUPR_UP5_S			20
+#define PRT_TCTUPR_UP5_M			MAKEMASK(0x7, 20)
+#define PRT_TCTUPR_UP6_S			24
+#define PRT_TCTUPR_UP6_M			MAKEMASK(0x7, 24)
+#define PRT_TCTUPR_UP7_S			28
+#define PRT_TCTUPR_UP7_M			MAKEMASK(0x7, 28)
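+/*
+ * Multi-field registers are written by OR-ing each value, shifted into
+ * place and clipped by its mask.  An illustrative sketch for the TC to
+ * user-priority map above (priority values chosen arbitrarily),
+ * assuming the wr32() register-write helper from ice_osdep.h:
+ *
+ *	u32 val = ((1 << PRT_TCTUPR_UP0_S) & PRT_TCTUPR_UP0_M) |
+ *		  ((2 << PRT_TCTUPR_UP1_S) & PRT_TCTUPR_UP1_M);
+ *	wr32(hw, PRT_TCTUPR(0), val);
+ */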
+#define GLHH_ART_CTL				0x000A41D4 /* Reset Source: POR */
+#define GLHH_ART_CTL_ACTIVE_S			0
+#define GLHH_ART_CTL_ACTIVE_M			BIT(0)
+#define GLHH_ART_CTL_TIME_OUT1_S		1
+#define GLHH_ART_CTL_TIME_OUT1_M		BIT(1)
+#define GLHH_ART_CTL_TIME_OUT2_S		2
+#define GLHH_ART_CTL_TIME_OUT2_M		BIT(2)
+#define GLHH_ART_CTL_RESET_HH_S			31
+#define GLHH_ART_CTL_RESET_HH_M			BIT(31)
+#define GLHH_ART_DATA				0x000A41E0 /* Reset Source: POR */
+#define GLHH_ART_DATA_AGENT_TYPE_S		0
+#define GLHH_ART_DATA_AGENT_TYPE_M		MAKEMASK(0x7, 0)
+#define GLHH_ART_DATA_SYNC_TYPE_S		3
+#define GLHH_ART_DATA_SYNC_TYPE_M		BIT(3)
+#define GLHH_ART_DATA_MAX_DELAY_S		4
+#define GLHH_ART_DATA_MAX_DELAY_M		MAKEMASK(0xF, 4)
+#define GLHH_ART_DATA_TIME_BASE_S		8
+#define GLHH_ART_DATA_TIME_BASE_M		MAKEMASK(0xF, 8)
+#define GLHH_ART_DATA_RSV_DATA_S		12
+#define GLHH_ART_DATA_RSV_DATA_M		MAKEMASK(0xFFFFF, 12)
+#define GLHH_ART_TIME_H				0x000A41D8 /* Reset Source: POR */
+#define GLHH_ART_TIME_H_ART_TIME_H_S		0
+#define GLHH_ART_TIME_H_ART_TIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLHH_ART_TIME_L				0x000A41DC /* Reset Source: POR */
+#define GLHH_ART_TIME_L_ART_TIME_L_S		0
+#define GLHH_ART_TIME_L_ART_TIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_AUX_IN_0(_i)			(0x000889D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_0_MAX_INDEX		1
+#define GLTSYN_AUX_IN_0_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_0_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_0_INT_ENA_S		4
+#define GLTSYN_AUX_IN_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_1(_i)			(0x000889E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_1_MAX_INDEX		1
+#define GLTSYN_AUX_IN_1_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_1_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_1_INT_ENA_S		4
+#define GLTSYN_AUX_IN_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_IN_2(_i)			(0x000889E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_IN_2_MAX_INDEX		1
+#define GLTSYN_AUX_IN_2_EVNTLVL_S		0
+#define GLTSYN_AUX_IN_2_EVNTLVL_M		MAKEMASK(0x3, 0)
+#define GLTSYN_AUX_IN_2_INT_ENA_S		4
+#define GLTSYN_AUX_IN_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0(_i)			(0x00088998 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_0_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_0_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_0_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_0_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_0_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_0_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_0_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_0_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_0_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_0_PULSEW_S		8
+#define GLTSYN_AUX_OUT_0_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_1(_i)			(0x000889A0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_1_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_1_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_1_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_1_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_1_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_1_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_1_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_1_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_1_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_1_PULSEW_S		8
+#define GLTSYN_AUX_OUT_1_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_2(_i)			(0x000889A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_2_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_2_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_2_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_2_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_2_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_2_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_2_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_2_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_2_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_2_PULSEW_S		8
+#define GLTSYN_AUX_OUT_2_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_AUX_OUT_3(_i)			(0x000889B0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_AUX_OUT_3_MAX_INDEX		1
+#define GLTSYN_AUX_OUT_3_OUT_ENA_S		0
+#define GLTSYN_AUX_OUT_3_OUT_ENA_M		BIT(0)
+#define GLTSYN_AUX_OUT_3_OUTMOD_S		1
+#define GLTSYN_AUX_OUT_3_OUTMOD_M		MAKEMASK(0x3, 1)
+#define GLTSYN_AUX_OUT_3_OUTLVL_S		3
+#define GLTSYN_AUX_OUT_3_OUTLVL_M		BIT(3)
+#define GLTSYN_AUX_OUT_3_INT_ENA_S		4
+#define GLTSYN_AUX_OUT_3_INT_ENA_M		BIT(4)
+#define GLTSYN_AUX_OUT_3_PULSEW_S		8
+#define GLTSYN_AUX_OUT_3_PULSEW_M		MAKEMASK(0xF, 8)
+#define GLTSYN_CLKO_0(_i)			(0x000889B8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_0_MAX_INDEX			1
+#define GLTSYN_CLKO_0_TSYNCLKO_S		0
+#define GLTSYN_CLKO_0_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_1(_i)			(0x000889C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_1_MAX_INDEX			1
+#define GLTSYN_CLKO_1_TSYNCLKO_S		0
+#define GLTSYN_CLKO_1_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_2(_i)			(0x000889C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_2_MAX_INDEX			1
+#define GLTSYN_CLKO_2_TSYNCLKO_S		0
+#define GLTSYN_CLKO_2_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CLKO_3(_i)			(0x000889D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_CLKO_3_MAX_INDEX			1
+#define GLTSYN_CLKO_3_TSYNCLKO_S		0
+#define GLTSYN_CLKO_3_TSYNCLKO_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_CMD				0x00088810 /* Reset Source: CORER */
+#define GLTSYN_CMD_CMD_S			0
+#define GLTSYN_CMD_CMD_M			MAKEMASK(0xFF, 0)
+#define GLTSYN_CMD_SEL_MASTER_S			8
+#define GLTSYN_CMD_SEL_MASTER_M			BIT(8)
+#define GLTSYN_CMD_SYNC				0x00088814 /* Reset Source: CORER */
+#define GLTSYN_CMD_SYNC_SYNC_S			0
+#define GLTSYN_CMD_SYNC_SYNC_M			MAKEMASK(0x3, 0)
+#define GLTSYN_ENA(_i)				(0x00088808 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_ENA_MAX_INDEX			1
+#define GLTSYN_ENA_TSYN_ENA_S			0
+#define GLTSYN_ENA_TSYN_ENA_M			BIT(0)
+#define GLTSYN_EVNT_H_0(_i)			(0x00088970 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_0_MAX_INDEX		1
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_0_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_1(_i)			(0x00088980 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_1_MAX_INDEX		1
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_1_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_H_2(_i)			(0x00088990 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_H_2_MAX_INDEX		1
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_S		0
+#define GLTSYN_EVNT_H_2_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_0(_i)			(0x00088968 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_0_MAX_INDEX		1
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_0_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_1(_i)			(0x00088978 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_1_MAX_INDEX		1
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_1_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_EVNT_L_2(_i)			(0x00088988 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_EVNT_L_2_MAX_INDEX		1
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_S		0
+#define GLTSYN_EVNT_L_2_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_H(_i)			(0x00088900 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_H_MAX_INDEX		1
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_S		0
+#define GLTSYN_HHTIME_H_TSYNEVNT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_HHTIME_L(_i)			(0x000888F8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_HHTIME_L_MAX_INDEX		1
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_S		0
+#define GLTSYN_HHTIME_L_TSYNEVNT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_INCVAL_H(_i)			(0x00088920 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_H_MAX_INDEX		1
+#define GLTSYN_INCVAL_H_INCVAL_H_S		0
+#define GLTSYN_INCVAL_H_INCVAL_H_M		MAKEMASK(0xFF, 0)
+#define GLTSYN_INCVAL_L(_i)			(0x00088918 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_INCVAL_L_MAX_INDEX		1
+#define GLTSYN_INCVAL_L_INCVAL_L_S		0
+#define GLTSYN_INCVAL_L_INCVAL_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_H(_i)			(0x00088910 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_H_MAX_INDEX		1
+#define GLTSYN_SHADJ_H_ADJUST_H_S		0
+#define GLTSYN_SHADJ_H_ADJUST_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHADJ_L(_i)			(0x00088908 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHADJ_L_MAX_INDEX		1
+#define GLTSYN_SHADJ_L_ADJUST_L_S		0
+#define GLTSYN_SHADJ_L_ADJUST_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_0(_i)			(0x000888E0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_0_MAX_INDEX		1
+#define GLTSYN_SHTIME_0_TSYNTIME_0_S		0
+#define GLTSYN_SHTIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_H(_i)			(0x000888F0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_H_MAX_INDEX		1
+#define GLTSYN_SHTIME_H_TSYNTIME_H_S		0
+#define GLTSYN_SHTIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_SHTIME_L(_i)			(0x000888E8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_SHTIME_L_MAX_INDEX		1
+#define GLTSYN_SHTIME_L_TSYNTIME_L_S		0
+#define GLTSYN_SHTIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_STAT(_i)				(0x000888C0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_STAT_MAX_INDEX			1
+#define GLTSYN_STAT_EVENT0_S			0
+#define GLTSYN_STAT_EVENT0_M			BIT(0)
+#define GLTSYN_STAT_EVENT1_S			1
+#define GLTSYN_STAT_EVENT1_M			BIT(1)
+#define GLTSYN_STAT_EVENT2_S			2
+#define GLTSYN_STAT_EVENT2_M			BIT(2)
+#define GLTSYN_STAT_TGT0_S			4
+#define GLTSYN_STAT_TGT0_M			BIT(4)
+#define GLTSYN_STAT_TGT1_S			5
+#define GLTSYN_STAT_TGT1_M			BIT(5)
+#define GLTSYN_STAT_TGT2_S			6
+#define GLTSYN_STAT_TGT2_M			BIT(6)
+#define GLTSYN_STAT_TGT3_S			7
+#define GLTSYN_STAT_TGT3_M			BIT(7)
+#define GLTSYN_SYNC_DLAY			0x00088818 /* Reset Source: CORER */
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_SYNC_DLAY_SYNC_DELAY_M		MAKEMASK(0x1F, 0)
+#define GLTSYN_TGT_H_0(_i)			(0x00088930 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_0_MAX_INDEX		1
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_0_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_1(_i)			(0x00088940 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_1_MAX_INDEX		1
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_1_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_2(_i)			(0x00088950 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_2_MAX_INDEX		1
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_2_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_H_3(_i)			(0x00088960 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_H_3_MAX_INDEX		1
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_S		0
+#define GLTSYN_TGT_H_3_TSYNTGTT_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_0(_i)			(0x00088928 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_0_MAX_INDEX		1
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_0_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_1(_i)			(0x00088938 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_1_MAX_INDEX		1
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_1_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_2(_i)			(0x00088948 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_2_MAX_INDEX		1
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_2_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TGT_L_3(_i)			(0x00088958 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TGT_L_3_MAX_INDEX		1
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_S		0
+#define GLTSYN_TGT_L_3_TSYNTGTT_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_0(_i)			(0x000888C8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_0_MAX_INDEX			1
+#define GLTSYN_TIME_0_TSYNTIME_0_S		0
+#define GLTSYN_TIME_0_TSYNTIME_0_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_H(_i)			(0x000888D8 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_H_MAX_INDEX			1
+#define GLTSYN_TIME_H_TSYNTIME_H_S		0
+#define GLTSYN_TIME_H_TSYNTIME_H_M		MAKEMASK(0xFFFFFFFF, 0)
+#define GLTSYN_TIME_L(_i)			(0x000888D0 + ((_i) * 4)) /* _i=0...1 */ /* Reset Source: CORER */
+#define GLTSYN_TIME_L_MAX_INDEX			1
+#define GLTSYN_TIME_L_TSYNTIME_L_S		0
+#define GLTSYN_TIME_L_TSYNTIME_L_M		MAKEMASK(0xFFFFFFFF, 0)
+#define PFHH_SEM				0x000A4200 /* Reset Source: PFR */
+#define PFHH_SEM_BUSY_S				0
+#define PFHH_SEM_BUSY_M				BIT(0)
+#define PFHH_SEM_PF_OWNER_S			4
+#define PFHH_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define PFTSYN_SEM				0x00088880 /* Reset Source: PFR */
+#define PFTSYN_SEM_BUSY_S			0
+#define PFTSYN_SEM_BUSY_M			BIT(0)
+#define PFTSYN_SEM_PF_OWNER_S			4
+#define PFTSYN_SEM_PF_OWNER_M			MAKEMASK(0x7, 4)
+#define GLPE_TSCD_FLR(_i)			(0x0051E24c + ((_i) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define GLPE_TSCD_FLR_MAX_INDEX			3
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_S		0
+#define GLPE_TSCD_FLR_DRAIN_VCTR_ID_M		MAKEMASK(0x3, 0)
+#define GLPE_TSCD_FLR_PORT_S			2
+#define GLPE_TSCD_FLR_PORT_M			MAKEMASK(0x7, 2)
+#define GLPE_TSCD_FLR_PF_NUM_S			5
+#define GLPE_TSCD_FLR_PF_NUM_M			MAKEMASK(0x7, 5)
+#define GLPE_TSCD_FLR_VM_VF_TYPE_S		8
+#define GLPE_TSCD_FLR_VM_VF_TYPE_M		MAKEMASK(0x3, 8)
+#define GLPE_TSCD_FLR_VM_VF_NUM_S		16
+#define GLPE_TSCD_FLR_VM_VF_NUM_M		MAKEMASK(0x3FF, 16)
+#define GLPE_TSCD_FLR_VLD_S			31
+#define GLPE_TSCD_FLR_VLD_M			BIT(31)
+#define GLPE_TSCD_PEPM				0x0051E228 /* Reset Source: CORER */
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_S		0
+#define GLPE_TSCD_PEPM_MDQ_CREDITS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS				0x0009E680 /* Reset Source: PFR */
+#define PF_VIRT_VSTATUS_NUM_VFS_S		0
+#define PF_VIRT_VSTATUS_NUM_VFS_M		MAKEMASK(0xFF, 0)
+#define PF_VIRT_VSTATUS_TOTAL_VFS_S		8
+#define PF_VIRT_VSTATUS_TOTAL_VFS_M		MAKEMASK(0xFF, 8)
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_S		16
+#define PF_VIRT_VSTATUS_IOV_ACTIVE_M		BIT(16)
+#define PF_VT_PFALLOC				0x001D2480 /* Reset Source: CORER */
+#define PF_VT_PFALLOC_FIRSTVF_S			0
+#define PF_VT_PFALLOC_FIRSTVF_M			MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_LASTVF_S			8
+#define PF_VT_PFALLOC_LASTVF_M			MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_VALID_S			31
+#define PF_VT_PFALLOC_VALID_M			BIT(31)
+#define PF_VT_PFALLOC_HIF			0x0009DD80 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_HIF_FIRSTVF_S		0
+#define PF_VT_PFALLOC_HIF_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_HIF_LASTVF_S		8
+#define PF_VT_PFALLOC_HIF_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_HIF_VALID_S		31
+#define PF_VT_PFALLOC_HIF_VALID_M		BIT(31)
+#define PF_VT_PFALLOC_PCIE			0x000BE080 /* Reset Source: PCIR */
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_S		0
+#define PF_VT_PFALLOC_PCIE_FIRSTVF_M		MAKEMASK(0xFF, 0)
+#define PF_VT_PFALLOC_PCIE_LASTVF_S		8
+#define PF_VT_PFALLOC_PCIE_LASTVF_M		MAKEMASK(0xFF, 8)
+#define PF_VT_PFALLOC_PCIE_VALID_S		31
+#define PF_VT_PFALLOC_PCIE_VALID_M		BIT(31)
+#define VSI_L2TAGSTXVALID(_VSI)			(0x00046000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_L2TAGSTXVALID_MAX_INDEX		767
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_S	0
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_M	MAKEMASK(0x7, 0)
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_S 3
+#define VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_M BIT(3)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_S	4
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_M	MAKEMASK(0x7, 4)
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_S 7
+#define VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_M BIT(7)
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_S	16
+#define VSI_L2TAGSTXVALID_TIR0INSERTID_M	MAKEMASK(0x7, 16)
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_S		19
+#define VSI_L2TAGSTXVALID_TIR0_INSERT_M		BIT(19)
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_S	20
+#define VSI_L2TAGSTXVALID_TIR1INSERTID_M	MAKEMASK(0x7, 20)
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_S		23
+#define VSI_L2TAGSTXVALID_TIR1_INSERT_M		BIT(23)
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_S	24
+#define VSI_L2TAGSTXVALID_TIR2INSERTID_M	MAKEMASK(0x7, 24)
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_S		27
+#define VSI_L2TAGSTXVALID_TIR2_INSERT_M		BIT(27)
+#define VSI_PASID(_VSI)				(0x0009C000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_PASID_MAX_INDEX			767
+#define VSI_PASID_PASID_S			0
+#define VSI_PASID_PASID_M			MAKEMASK(0xFFFFF, 0)
+#define VSI_PASID_EN_S				31
+#define VSI_PASID_EN_M				BIT(31)
+#define VSI_RUPR(_VSI)				(0x00050000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RUPR_MAX_INDEX			767
+#define VSI_RUPR_UP0_S				0
+#define VSI_RUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_RUPR_UP1_S				3
+#define VSI_RUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_RUPR_UP2_S				6
+#define VSI_RUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_RUPR_UP3_S				9
+#define VSI_RUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_RUPR_UP4_S				12
+#define VSI_RUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_RUPR_UP5_S				15
+#define VSI_RUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_RUPR_UP6_S				18
+#define VSI_RUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_RUPR_UP7_S				21
+#define VSI_RUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_RXSWCTRL(_VSI)			(0x00205000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_RXSWCTRL_MAX_INDEX			767
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_S	8
+#define VSI_RXSWCTRL_MACVSIPRUNEENABLE_M	BIT(8)
+#define VSI_RXSWCTRL_PRUNEENABLE_S		9
+#define VSI_RXSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 9)
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_S		13
+#define VSI_RXSWCTRL_SRCPRUNEENABLE_M		BIT(13)
+#define VSI_SRCSWCTRL(_VSI)			(0x00209000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_SRCSWCTRL_MAX_INDEX			767
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_S	0
+#define VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_M	BIT(0)
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_S		1
+#define VSI_SRCSWCTRL_ALLOWLOOPBACK_M		BIT(1)
+#define VSI_SRCSWCTRL_LANENABLE_S		2
+#define VSI_SRCSWCTRL_LANENABLE_M		BIT(2)
+#define VSI_SRCSWCTRL_MACAS_S			3
+#define VSI_SRCSWCTRL_MACAS_M			BIT(3)
+#define VSI_SRCSWCTRL_PRUNEENABLE_S		4
+#define VSI_SRCSWCTRL_PRUNEENABLE_M		MAKEMASK(0xF, 4)
+#define VSI_SWITCHID(_VSI)			(0x00215000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWITCHID_MAX_INDEX			767
+#define VSI_SWITCHID_SWITCHID_S			0
+#define VSI_SWITCHID_SWITCHID_M			MAKEMASK(0xFF, 0)
+#define VSI_SWT_MIREG(_VSI)			(0x00207000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIREG_MAX_INDEX			767
+#define VSI_SWT_MIREG_MIRRULE_S			0
+#define VSI_SWT_MIREG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIREG_MIRENA_S			7
+#define VSI_SWT_MIREG_MIRENA_M			BIT(7)
+#define VSI_SWT_MIRIG(_VSI)			(0x00208000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSI_SWT_MIRIG_MAX_INDEX			767
+#define VSI_SWT_MIRIG_MIRRULE_S			0
+#define VSI_SWT_MIRIG_MIRRULE_M			MAKEMASK(0x3F, 0)
+#define VSI_SWT_MIRIG_MIRENA_S			7
+#define VSI_SWT_MIRIG_MIRENA_M			BIT(7)
+#define VSI_TAIR(_VSI)				(0x00044000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAIR_MAX_INDEX			767
+#define VSI_TAIR_PORT_TAG_ID_S			0
+#define VSI_TAIR_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TAR(_VSI)				(0x00045000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TAR_MAX_INDEX			767
+#define VSI_TAR_ACCEPTTAGGED_S			0
+#define VSI_TAR_ACCEPTTAGGED_M			MAKEMASK(0x3FF, 0)
+#define VSI_TAR_ACCEPTUNTAGGED_S		16
+#define VSI_TAR_ACCEPTUNTAGGED_M		MAKEMASK(0x3FF, 16)
+#define VSI_TIR_0(_VSI)				(0x00041000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_0_MAX_INDEX			767
+#define VSI_TIR_0_PORT_TAG_ID_S			0
+#define VSI_TIR_0_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TIR_1(_VSI)				(0x00042000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_1_MAX_INDEX			767
+#define VSI_TIR_1_PORT_TAG_ID_S			0
+#define VSI_TIR_1_PORT_TAG_ID_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VSI_TIR_2(_VSI)				(0x00043000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TIR_2_MAX_INDEX			767
+#define VSI_TIR_2_PORT_TAG_ID_S			0
+#define VSI_TIR_2_PORT_TAG_ID_M			MAKEMASK(0xFFFF, 0)
+#define VSI_TSR(_VSI)				(0x00051000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TSR_MAX_INDEX			767
+#define VSI_TSR_STRIPTAG_S			0
+#define VSI_TSR_STRIPTAG_M			MAKEMASK(0x3FF, 0)
+#define VSI_TSR_SHOWTAG_S			10
+#define VSI_TSR_SHOWTAG_M			MAKEMASK(0x3FF, 10)
+#define VSI_TSR_SHOWPRIONLY_S			20
+#define VSI_TSR_SHOWPRIONLY_M			MAKEMASK(0x3FF, 20)
+#define VSI_TUPIOM(_VSI)			(0x00048000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPIOM_MAX_INDEX			767
+#define VSI_TUPIOM_UP0_S			0
+#define VSI_TUPIOM_UP0_M			MAKEMASK(0x7, 0)
+#define VSI_TUPIOM_UP1_S			3
+#define VSI_TUPIOM_UP1_M			MAKEMASK(0x7, 3)
+#define VSI_TUPIOM_UP2_S			6
+#define VSI_TUPIOM_UP2_M			MAKEMASK(0x7, 6)
+#define VSI_TUPIOM_UP3_S			9
+#define VSI_TUPIOM_UP3_M			MAKEMASK(0x7, 9)
+#define VSI_TUPIOM_UP4_S			12
+#define VSI_TUPIOM_UP4_M			MAKEMASK(0x7, 12)
+#define VSI_TUPIOM_UP5_S			15
+#define VSI_TUPIOM_UP5_M			MAKEMASK(0x7, 15)
+#define VSI_TUPIOM_UP6_S			18
+#define VSI_TUPIOM_UP6_M			MAKEMASK(0x7, 18)
+#define VSI_TUPIOM_UP7_S			21
+#define VSI_TUPIOM_UP7_M			MAKEMASK(0x7, 21)
+#define VSI_TUPR(_VSI)				(0x00047000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_TUPR_MAX_INDEX			767
+#define VSI_TUPR_UP0_S				0
+#define VSI_TUPR_UP0_M				MAKEMASK(0x7, 0)
+#define VSI_TUPR_UP1_S				3
+#define VSI_TUPR_UP1_M				MAKEMASK(0x7, 3)
+#define VSI_TUPR_UP2_S				6
+#define VSI_TUPR_UP2_M				MAKEMASK(0x7, 6)
+#define VSI_TUPR_UP3_S				9
+#define VSI_TUPR_UP3_M				MAKEMASK(0x7, 9)
+#define VSI_TUPR_UP4_S				12
+#define VSI_TUPR_UP4_M				MAKEMASK(0x7, 12)
+#define VSI_TUPR_UP5_S				15
+#define VSI_TUPR_UP5_M				MAKEMASK(0x7, 15)
+#define VSI_TUPR_UP6_S				18
+#define VSI_TUPR_UP6_M				MAKEMASK(0x7, 18)
+#define VSI_TUPR_UP7_S				21
+#define VSI_TUPR_UP7_M				MAKEMASK(0x7, 21)
+#define VSI_VSI2F(_VSI)				(0x001D0000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MAX_INDEX			767
+#define VSI_VSI2F_VFVMNUMBER_S			0
+#define VSI_VSI2F_VFVMNUMBER_M			MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_PFNUMBER_S			12
+#define VSI_VSI2F_PFNUMBER_M			MAKEMASK(0x7, 12)
+#define VSI_VSI2F_BUFFERNUMBER_S		16
+#define VSI_VSI2F_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_VSI_NUMBER_S			20
+#define VSI_VSI2F_VSI_NUMBER_M			MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_VSI_ENABLE_S			31
+#define VSI_VSI2F_VSI_ENABLE_M			BIT(31)
+#define VSI_VSI2F_MBX(_VSI)			(0x00232000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSI_VSI2F_MBX_MAX_INDEX			767
+#define VSI_VSI2F_MBX_VFVMNUMBER_S		0
+#define VSI_VSI2F_MBX_VFVMNUMBER_M		MAKEMASK(0x3FF, 0)
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_S		10
+#define VSI_VSI2F_MBX_FUNCTIONTYPE_M		MAKEMASK(0x3, 10)
+#define VSI_VSI2F_MBX_PFNUMBER_S		12
+#define VSI_VSI2F_MBX_PFNUMBER_M		MAKEMASK(0x7, 12)
+#define VSI_VSI2F_MBX_BUFFERNUMBER_S		16
+#define VSI_VSI2F_MBX_BUFFERNUMBER_M		MAKEMASK(0x7, 16)
+#define VSI_VSI2F_MBX_VSI_NUMBER_S		20
+#define VSI_VSI2F_MBX_VSI_NUMBER_M		MAKEMASK(0x3FF, 20)
+#define VSI_VSI2F_MBX_VSI_ENABLE_S		31
+#define VSI_VSI2F_MBX_VSI_ENABLE_M		BIT(31)
+#define VSIQF_FD_CNT(_VSI)			(0x00464000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CNT_MAX_INDEX			767
+#define VSIQF_FD_CNT_FD_GCNT_S			0
+#define VSIQF_FD_CNT_FD_GCNT_M			MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_CNT_FD_BCNT_S			16
+#define VSIQF_FD_CNT_FD_BCNT_M			MAKEMASK(0x3FFF, 16)
+#define VSIQF_FD_CTL1(_VSI)			(0x00411000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_CTL1_MAX_INDEX			767
+#define VSIQF_FD_CTL1_FLT_ENA_S			0
+#define VSIQF_FD_CTL1_FLT_ENA_M			BIT(0)
+#define VSIQF_FD_CTL1_CFG_ENA_S			1
+#define VSIQF_FD_CTL1_CFG_ENA_M			BIT(1)
+#define VSIQF_FD_CTL1_EVICT_ENA_S		2
+#define VSIQF_FD_CTL1_EVICT_ENA_M		BIT(2)
+#define VSIQF_FD_DFLT(_VSI)			(0x00457000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_FD_DFLT_MAX_INDEX			767
+#define VSIQF_FD_DFLT_DEFLT_QINDX_S		0
+#define VSIQF_FD_DFLT_DEFLT_QINDX_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_S		12
+#define VSIQF_FD_DFLT_DEFLT_TOQUEUE_M		MAKEMASK(0x7, 12)
+#define VSIQF_FD_DFLT_COMP_QINDX_S		16
+#define VSIQF_FD_DFLT_COMP_QINDX_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_S	28
+#define VSIQF_FD_DFLT_DEFLT_QINDX_PRIO_M	MAKEMASK(0x7, 28)
+#define VSIQF_FD_DFLT_DEFLT_DROP_S		31
+#define VSIQF_FD_DFLT_DEFLT_DROP_M		BIT(31)
+#define VSIQF_FD_SIZE(_VSI)			(0x00462000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: CORER */
+#define VSIQF_FD_SIZE_MAX_INDEX			767
+#define VSIQF_FD_SIZE_FD_GSIZE_S		0
+#define VSIQF_FD_SIZE_FD_GSIZE_M		MAKEMASK(0x3FFF, 0)
+#define VSIQF_FD_SIZE_FD_BSIZE_S		16
+#define VSIQF_FD_SIZE_FD_BSIZE_M		MAKEMASK(0x3FFF, 16)
+#define VSIQF_HASH_CTL(_VSI)			(0x0040D000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HASH_CTL_MAX_INDEX		767
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_S		0
+#define VSIQF_HASH_CTL_HASH_LUT_SEL_M		MAKEMASK(0x3, 0)
+#define VSIQF_HASH_CTL_GLOB_LUT_S		2
+#define VSIQF_HASH_CTL_GLOB_LUT_M		MAKEMASK(0xF, 2)
+#define VSIQF_HASH_CTL_HASH_SCHEME_S		6
+#define VSIQF_HASH_CTL_HASH_SCHEME_M		MAKEMASK(0x3, 6)
+#define VSIQF_HASH_CTL_TC_OVER_SEL_S		8
+#define VSIQF_HASH_CTL_TC_OVER_SEL_M		MAKEMASK(0x1F, 8)
+#define VSIQF_HASH_CTL_TC_OVER_ENA_S		15
+#define VSIQF_HASH_CTL_TC_OVER_ENA_M		BIT(15)
+#define VSIQF_HKEY(_i, _VSI)			(0x00400000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...12, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HKEY_MAX_INDEX			12
+#define VSIQF_HKEY_KEY_0_S			0
+#define VSIQF_HKEY_KEY_0_M			MAKEMASK(0xFF, 0)
+#define VSIQF_HKEY_KEY_1_S			8
+#define VSIQF_HKEY_KEY_1_M			MAKEMASK(0xFF, 8)
+#define VSIQF_HKEY_KEY_2_S			16
+#define VSIQF_HKEY_KEY_2_M			MAKEMASK(0xFF, 16)
+#define VSIQF_HKEY_KEY_3_S			24
+#define VSIQF_HKEY_KEY_3_M			MAKEMASK(0xFF, 24)
+#define VSIQF_HLUT(_i, _VSI)			(0x00420000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...15, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_HLUT_MAX_INDEX			15
+#define VSIQF_HLUT_LUT0_S			0
+#define VSIQF_HLUT_LUT0_M			MAKEMASK(0xF, 0)
+#define VSIQF_HLUT_LUT1_S			8
+#define VSIQF_HLUT_LUT1_M			MAKEMASK(0xF, 8)
+#define VSIQF_HLUT_LUT2_S			16
+#define VSIQF_HLUT_LUT2_M			MAKEMASK(0xF, 16)
+#define VSIQF_HLUT_LUT3_S			24
+#define VSIQF_HLUT_LUT3_M			MAKEMASK(0xF, 24)
+#define VSIQF_PE_CTL1(_VSI)			(0x00414000 + ((_VSI) * 4)) /* _i=0...767 */ /* Reset Source: PFR */
+#define VSIQF_PE_CTL1_MAX_INDEX			767
+#define VSIQF_PE_CTL1_PE_FLTENA_S		0
+#define VSIQF_PE_CTL1_PE_FLTENA_M		BIT(0)
+#define VSIQF_TC_REGION(_i, _VSI)		(0x00448000 + ((_i) * 4096 + (_VSI) * 4)) /* _i=0...3, _VSI=0...767 */ /* Reset Source: PFR */
+#define VSIQF_TC_REGION_MAX_INDEX		3
+#define VSIQF_TC_REGION_TC_BASE0_S		0
+#define VSIQF_TC_REGION_TC_BASE0_M		MAKEMASK(0x7FF, 0)
+#define VSIQF_TC_REGION_TC_SIZE0_S		11
+#define VSIQF_TC_REGION_TC_SIZE0_M		MAKEMASK(0xF, 11)
+#define VSIQF_TC_REGION_TC_BASE1_S		16
+#define VSIQF_TC_REGION_TC_BASE1_M		MAKEMASK(0x7FF, 16)
+#define VSIQF_TC_REGION_TC_SIZE1_S		27
+#define VSIQF_TC_REGION_TC_SIZE1_M		MAKEMASK(0xF, 27)
+#define GLPM_WUMC				0x0009DEE4 /* Reset Source: POR */
+#define GLPM_WUMC_MNG_WU_PF_S			16
+#define GLPM_WUMC_MNG_WU_PF_M			MAKEMASK(0xFF, 16)
+#define PFPM_APM				0x000B8080 /* Reset Source: POR */
+#define PFPM_APM_APME_S				0
+#define PFPM_APM_APME_M				BIT(0)
+#define PFPM_WUC				0x0009DC80 /* Reset Source: POR */
+#define PFPM_WUC_EN_APM_D0_S			5
+#define PFPM_WUC_EN_APM_D0_M			BIT(5)
+#define PFPM_WUFC				0x0009DC00 /* Reset Source: POR */
+#define PFPM_WUFC_LNKC_S			0
+#define PFPM_WUFC_LNKC_M			BIT(0)
+#define PFPM_WUFC_MAG_S				1
+#define PFPM_WUFC_MAG_M				BIT(1)
+#define PFPM_WUFC_MNG_S				3
+#define PFPM_WUFC_MNG_M				BIT(3)
+#define PFPM_WUFC_FLX0_ACT_S			4
+#define PFPM_WUFC_FLX0_ACT_M			BIT(4)
+#define PFPM_WUFC_FLX1_ACT_S			5
+#define PFPM_WUFC_FLX1_ACT_M			BIT(5)
+#define PFPM_WUFC_FLX2_ACT_S			6
+#define PFPM_WUFC_FLX2_ACT_M			BIT(6)
+#define PFPM_WUFC_FLX3_ACT_S			7
+#define PFPM_WUFC_FLX3_ACT_M			BIT(7)
+#define PFPM_WUFC_FLX4_ACT_S			8
+#define PFPM_WUFC_FLX4_ACT_M			BIT(8)
+#define PFPM_WUFC_FLX5_ACT_S			9
+#define PFPM_WUFC_FLX5_ACT_M			BIT(9)
+#define PFPM_WUFC_FLX6_ACT_S			10
+#define PFPM_WUFC_FLX6_ACT_M			BIT(10)
+#define PFPM_WUFC_FLX7_ACT_S			11
+#define PFPM_WUFC_FLX7_ACT_M			BIT(11)
+#define PFPM_WUFC_FLX0_S			16
+#define PFPM_WUFC_FLX0_M			BIT(16)
+#define PFPM_WUFC_FLX1_S			17
+#define PFPM_WUFC_FLX1_M			BIT(17)
+#define PFPM_WUFC_FLX2_S			18
+#define PFPM_WUFC_FLX2_M			BIT(18)
+#define PFPM_WUFC_FLX3_S			19
+#define PFPM_WUFC_FLX3_M			BIT(19)
+#define PFPM_WUFC_FLX4_S			20
+#define PFPM_WUFC_FLX4_M			BIT(20)
+#define PFPM_WUFC_FLX5_S			21
+#define PFPM_WUFC_FLX5_M			BIT(21)
+#define PFPM_WUFC_FLX6_S			22
+#define PFPM_WUFC_FLX6_M			BIT(22)
+#define PFPM_WUFC_FLX7_S			23
+#define PFPM_WUFC_FLX7_M			BIT(23)
+#define PFPM_WUFC_FW_RST_WK_S			31
+#define PFPM_WUFC_FW_RST_WK_M			BIT(31)
+#define PFPM_WUS				0x0009DB80 /* Reset Source: POR */
+#define PFPM_WUS_LNKC_S				0
+#define PFPM_WUS_LNKC_M				BIT(0)
+#define PFPM_WUS_MAG_S				1
+#define PFPM_WUS_MAG_M				BIT(1)
+#define PFPM_WUS_PME_STATUS_S			2
+#define PFPM_WUS_PME_STATUS_M			BIT(2)
+#define PFPM_WUS_MNG_S				3
+#define PFPM_WUS_MNG_M				BIT(3)
+#define PFPM_WUS_FLX0_S				16
+#define PFPM_WUS_FLX0_M				BIT(16)
+#define PFPM_WUS_FLX1_S				17
+#define PFPM_WUS_FLX1_M				BIT(17)
+#define PFPM_WUS_FLX2_S				18
+#define PFPM_WUS_FLX2_M				BIT(18)
+#define PFPM_WUS_FLX3_S				19
+#define PFPM_WUS_FLX3_M				BIT(19)
+#define PFPM_WUS_FLX4_S				20
+#define PFPM_WUS_FLX4_M				BIT(20)
+#define PFPM_WUS_FLX5_S				21
+#define PFPM_WUS_FLX5_M				BIT(21)
+#define PFPM_WUS_FLX6_S				22
+#define PFPM_WUS_FLX6_M				BIT(22)
+#define PFPM_WUS_FLX7_S				23
+#define PFPM_WUS_FLX7_M				BIT(23)
+#define PFPM_WUS_FW_RST_WK_S			31
+#define PFPM_WUS_FW_RST_WK_M			BIT(31)
+#define PRTPM_SAH(_i)				(0x001E3BA0 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAH_MAX_INDEX			3
+#define PRTPM_SAH_PFPM_SAH_S			0
+#define PRTPM_SAH_PFPM_SAH_M			MAKEMASK(0xFFFF, 0)
+#define PRTPM_SAH_PF_NUM_S			26
+#define PRTPM_SAH_PF_NUM_M			MAKEMASK(0xF, 26)
+#define PRTPM_SAH_MC_MAG_EN_S			30
+#define PRTPM_SAH_MC_MAG_EN_M			BIT(30)
+#define PRTPM_SAH_AV_S				31
+#define PRTPM_SAH_AV_M				BIT(31)
+#define PRTPM_SAL(_i)				(0x001E3B20 + ((_i) * 32)) /* _i=0...3 */ /* Reset Source: PFR */
+#define PRTPM_SAL_MAX_INDEX			3
+#define PRTPM_SAL_PFPM_SAL_S			0
+#define PRTPM_SAL_PFPM_SAL_M			MAKEMASK(0xFFFFFFFF, 0)
+#define GLPE_CQM_FUNC_INVALIDATE		0x00503300 /* Reset Source: CORER */
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_S	0
+#define GLPE_CQM_FUNC_INVALIDATE_PF_NUM_M	MAKEMASK(0x7, 0)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_S	3
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_NUM_M	MAKEMASK(0x3FF, 3)
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_S	13
+#define GLPE_CQM_FUNC_INVALIDATE_VM_VF_TYPE_M	MAKEMASK(0x3, 13)
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_S	31
+#define GLPE_CQM_FUNC_INVALIDATE_ENABLE_M	BIT(31)
+#define VFPE_MRTEIDXMASK			0x00009000 /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define GLTSYN_HH_DLAY				0x0008881C /* Reset Source: CORER */
+#define GLTSYN_HH_DLAY_SYNC_DELAY_S		0
+#define GLTSYN_HH_DLAY_SYNC_DELAY_M		MAKEMASK(0xF, 0)
+#define VF_MBX_ARQBAH1				0x00006000 /* Reset Source: CORER */
+#define VF_MBX_ARQBAH1_ARQBAH_S			0
+#define VF_MBX_ARQBAH1_ARQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ARQBAL1				0x00006C00 /* Reset Source: CORER */
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_ARQBAL1_ARQBAL_S			6
+#define VF_MBX_ARQBAL1_ARQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ARQH1				0x00007400 /* Reset Source: CORER */
+#define VF_MBX_ARQH1_ARQH_S			0
+#define VF_MBX_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1				0x00008000 /* Reset Source: CORER */
+#define VF_MBX_ARQLEN1_ARQLEN_S			0
+#define VF_MBX_ARQLEN1_ARQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ARQLEN1_ARQVFE_S			28
+#define VF_MBX_ARQLEN1_ARQVFE_M			BIT(28)
+#define VF_MBX_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_ARQT1				0x00007000 /* Reset Source: CORER */
+#define VF_MBX_ARQT1_ARQT_S			0
+#define VF_MBX_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQBAH1				0x00007800 /* Reset Source: CORER */
+#define VF_MBX_ATQBAH1_ATQBAH_S			0
+#define VF_MBX_ATQBAH1_ATQBAH_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_ATQBAL1				0x00007C00 /* Reset Source: CORER */
+#define VF_MBX_ATQBAL1_ATQBAL_S			6
+#define VF_MBX_ATQBAL1_ATQBAL_M			MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_ATQH1				0x00006400 /* Reset Source: CORER */
+#define VF_MBX_ATQH1_ATQH_S			0
+#define VF_MBX_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1				0x00006800 /* Reset Source: CORER */
+#define VF_MBX_ATQLEN1_ATQLEN_S			0
+#define VF_MBX_ATQLEN1_ATQLEN_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_ATQLEN1_ATQVFE_S			28
+#define VF_MBX_ATQLEN1_ATQVFE_M			BIT(28)
+#define VF_MBX_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_ATQT1				0x00008400 /* Reset Source: CORER */
+#define VF_MBX_ATQT1_ATQT_S			0
+#define VF_MBX_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define PFPCI_VF_FLUSH_DONE1			0x0000E400 /* Reset Source: PCIR */
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_S	0
+#define PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_M	BIT(0)
+#define VFGEN_RSTAT1				0x00008800 /* Reset Source: VFR */
+#define VFGEN_RSTAT1_VFR_STATE_S		0
+#define VFGEN_RSTAT1_VFR_STATE_M		MAKEMASK(0x3, 0)
+#define VFINT_DYN_CTL0				0x00005C00 /* Reset Source: PFR */
+#define VFINT_DYN_CTL0_INTENA_S			0
+#define VFINT_DYN_CTL0_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL0_CLEARPBA_S		1
+#define VFINT_DYN_CTL0_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL0_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL0_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL0_ITR_INDX_S		3
+#define VFINT_DYN_CTL0_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL0_INTERVAL_S		5
+#define VFINT_DYN_CTL0_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTL0_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTL0_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL0_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL0_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL0_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL0_INTENA_MSK_S		31
+#define VFINT_DYN_CTL0_INTENA_MSK_M		BIT(31)
+#define VFINT_DYN_CTLN(_i)			(0x00003800 + ((_i) * 4)) /* _i=0...63 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTLN_MAX_INDEX		63
+#define VFINT_DYN_CTLN_INTENA_S			0
+#define VFINT_DYN_CTLN_INTENA_M			BIT(0)
+#define VFINT_DYN_CTLN_CLEARPBA_S		1
+#define VFINT_DYN_CTLN_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTLN_SWINT_TRIG_S		2
+#define VFINT_DYN_CTLN_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTLN_ITR_INDX_S		3
+#define VFINT_DYN_CTLN_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTLN_INTERVAL_S		5
+#define VFINT_DYN_CTLN_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_S	24
+#define VFINT_DYN_CTLN_SW_ITR_INDX_ENA_M	BIT(24)
+#define VFINT_DYN_CTLN_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTLN_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTLN_WB_ON_ITR_S		30
+#define VFINT_DYN_CTLN_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTLN_INTENA_MSK_S		31
+#define VFINT_DYN_CTLN_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR0(_i)				(0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset Source: PFR */
+#define VFINT_ITR0_MAX_INDEX			2
+#define VFINT_ITR0_INTERVAL_S			0
+#define VFINT_ITR0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITRN(_i, _j)			(0x00002800 + ((_i) * 4 + (_j) * 12)) /* _i=0...2, _j=0...63 */ /* Reset Source: PFR */
+#define VFINT_ITRN_MAX_INDEX			2
+#define VFINT_ITRN_INTERVAL_S			0
+#define VFINT_ITRN_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define QRX_TAIL1(_QRX)				(0x00002000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QRX_TAIL1_MAX_INDEX			255
+#define QRX_TAIL1_TAIL_S			0
+#define QRX_TAIL1_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define QTX_TAIL(_DBQM)				(0x00000000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define QTX_TAIL_MAX_INDEX			255
+#define QTX_TAIL_QTX_COMM_DBELL_S		0
+#define QTX_TAIL_QTX_COMM_DBELL_M		MAKEMASK(0xFFFFFFFF, 0)
+#define MSIX_TMSG1(_i)				(0x00000008 + ((_i) * 16)) /* _i=0...64 */ /* Reset Source: FLR */
+#define MSIX_TMSG1_MAX_INDEX			64
+#define MSIX_TMSG1_MSIXTMSG_S			0
+#define MSIX_TMSG1_MSIXTMSG_M			MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_AEQALLOC1				0x0000A400 /* Reset Source: VFR */
+#define VFPE_AEQALLOC1_AECOUNT_S		0
+#define VFPE_AEQALLOC1_AECOUNT_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPHIGH1				0x00009800 /* Reset Source: VFR */
+#define VFPE_CCQPHIGH1_PECCQPHIGH_S		0
+#define VFPE_CCQPHIGH1_PECCQPHIGH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPLOW1				0x0000AC00 /* Reset Source: VFR */
+#define VFPE_CCQPLOW1_PECCQPLOW_S		0
+#define VFPE_CCQPLOW1_PECCQPLOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_CCQPSTATUS1			0x0000B800 /* Reset Source: VFR */
+#define VFPE_CCQPSTATUS1_CCQP_DONE_S		0
+#define VFPE_CCQPSTATUS1_CCQP_DONE_M		BIT(0)
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_S		4
+#define VFPE_CCQPSTATUS1_HMC_PROFILE_M		MAKEMASK(0x7, 4)
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_S		16
+#define VFPE_CCQPSTATUS1_RDMA_EN_VFS_M		MAKEMASK(0x3F, 16)
+#define VFPE_CCQPSTATUS1_CCQP_ERR_S		31
+#define VFPE_CCQPSTATUS1_CCQP_ERR_M		BIT(31)
+#define VFPE_CQACK1				0x0000B000 /* Reset Source: VFR */
+#define VFPE_CQACK1_PECQID_S			0
+#define VFPE_CQACK1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQARM1				0x0000B400 /* Reset Source: VFR */
+#define VFPE_CQARM1_PECQID_S			0
+#define VFPE_CQARM1_PECQID_M			MAKEMASK(0x7FFFF, 0)
+#define VFPE_CQPDB1				0x0000BC00 /* Reset Source: VFR */
+#define VFPE_CQPDB1_WQHEAD_S			0
+#define VFPE_CQPDB1_WQHEAD_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPERRCODES1			0x00009C00 /* Reset Source: VFR */
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_S	0
+#define VFPE_CQPERRCODES1_CQP_MINOR_CODE_M	MAKEMASK(0xFFFF, 0)
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_S	16
+#define VFPE_CQPERRCODES1_CQP_MAJOR_CODE_M	MAKEMASK(0xFFFF, 16)
+#define VFPE_CQPTAIL1				0x0000A000 /* Reset Source: VFR */
+#define VFPE_CQPTAIL1_WQTAIL_S			0
+#define VFPE_CQPTAIL1_WQTAIL_M			MAKEMASK(0x7FF, 0)
+#define VFPE_CQPTAIL1_CQP_OP_ERR_S		31
+#define VFPE_CQPTAIL1_CQP_OP_ERR_M		BIT(31)
+#define VFPE_IPCONFIG01				0x00008C00 /* Reset Source: VFR */
+#define VFPE_IPCONFIG01_PEIPID_S		0
+#define VFPE_IPCONFIG01_PEIPID_M		MAKEMASK(0xFFFF, 0)
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_S	16
+#define VFPE_IPCONFIG01_USEENTIREIDRANGE_M	BIT(16)
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_S	17
+#define VFPE_IPCONFIG01_UDP_SRC_PORT_MASK_EN_M	BIT(17)
+#define VFPE_MRTEIDXMASK1(_VF)			(0x00509800 + ((_VF) * 4)) /* _i=0...255 */ /* Reset Source: PFR */
+#define VFPE_MRTEIDXMASK1_MAX_INDEX		255
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_S	0
+#define VFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_M	MAKEMASK(0x1F, 0)
+#define VFPE_RCVUNEXPECTEDERROR1		0x00009400 /* Reset Source: VFR */
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_S 0
+#define VFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_M MAKEMASK(0xFFFFFF, 0)
+#define VFPE_TCPNOWTIMER1			0x0000A800 /* Reset Source: VFR */
+#define VFPE_TCPNOWTIMER1_TCP_NOW_S		0
+#define VFPE_TCPNOWTIMER1_TCP_NOW_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VFPE_WQEALLOC1				0x0000C000 /* Reset Source: VFR */
+#define VFPE_WQEALLOC1_PEQPID_S			0
+#define VFPE_WQEALLOC1_PEQPID_M			MAKEMASK(0x3FFFF, 0)
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_S		20
+#define VFPE_WQEALLOC1_WQE_DESC_INDEX_M		MAKEMASK(0xFFF, 20)
+#define VF_MBX_CPM_ARQBAH1			0x0000F060 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ARQBAL1			0x0000F050 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ARQH1			0x0000F080 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQH1_ARQH_S			0
+#define VF_MBX_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1			0x0000F070 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ARQT1			0x0000F090 /* Reset Source: CORER */
+#define VF_MBX_CPM_ARQT1_ARQT_S			0
+#define VF_MBX_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQBAH1			0x0000F010 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_CPM_ATQBAL1			0x0000F000 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_CPM_ATQH1			0x0000F030 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQH1_ATQH_S			0
+#define VF_MBX_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1			0x0000F020 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_CPM_ATQT1			0x0000F040 /* Reset Source: CORER */
+#define VF_MBX_CPM_ATQT1_ATQT_S			0
+#define VF_MBX_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQBAH1			0x00020060 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_HLP_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ARQBAL1			0x00020050 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_HLP_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ARQH1			0x00020080 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQH1_ARQH_S			0
+#define VF_MBX_HLP_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1			0x00020070 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_HLP_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_HLP_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_HLP_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_HLP_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_HLP_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ARQT1			0x00020090 /* Reset Source: CORER */
+#define VF_MBX_HLP_ARQT1_ARQT_S			0
+#define VF_MBX_HLP_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQBAH1			0x00020010 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_HLP_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_HLP_ATQBAL1			0x00020000 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_HLP_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_HLP_ATQH1			0x00020030 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQH1_ATQH_S			0
+#define VF_MBX_HLP_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1			0x00020020 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_HLP_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_HLP_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_HLP_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_HLP_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_HLP_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_HLP_ATQT1			0x00020040 /* Reset Source: CORER */
+#define VF_MBX_HLP_ATQT1_ATQT_S			0
+#define VF_MBX_HLP_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQBAH1			0x00021060 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_S		0
+#define VF_MBX_PSM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ARQBAL1			0x00021050 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_S		6
+#define VF_MBX_PSM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ARQH1			0x00021080 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQH1_ARQH_S			0
+#define VF_MBX_PSM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1			0x00021070 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_S		0
+#define VF_MBX_PSM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_S		28
+#define VF_MBX_PSM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_S		29
+#define VF_MBX_PSM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_S		30
+#define VF_MBX_PSM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_S		31
+#define VF_MBX_PSM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ARQT1			0x00021090 /* Reset Source: CORER */
+#define VF_MBX_PSM_ARQT1_ARQT_S			0
+#define VF_MBX_PSM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQBAH1			0x00021010 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_S		0
+#define VF_MBX_PSM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_MBX_PSM_ATQBAL1			0x00021000 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_S		6
+#define VF_MBX_PSM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_MBX_PSM_ATQH1			0x00021030 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQH1_ATQH_S			0
+#define VF_MBX_PSM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1			0x00021020 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_S		0
+#define VF_MBX_PSM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_S		28
+#define VF_MBX_PSM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_S		29
+#define VF_MBX_PSM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_S		30
+#define VF_MBX_PSM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_S		31
+#define VF_MBX_PSM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_MBX_PSM_ATQT1			0x00021040 /* Reset Source: CORER */
+#define VF_MBX_PSM_ATQT1_ATQT_S			0
+#define VF_MBX_PSM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQBAH1			0x0000F160 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAH1_ARQBAH_S		0
+#define VF_SB_CPM_ARQBAH1_ARQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ARQBAL1			0x0000F150 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_S		0
+#define VF_SB_CPM_ARQBAL1_ARQBAL_LSB_M		MAKEMASK(0x3F, 0)
+#define VF_SB_CPM_ARQBAL1_ARQBAL_S		6
+#define VF_SB_CPM_ARQBAL1_ARQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ARQH1				0x0000F180 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQH1_ARQH_S			0
+#define VF_SB_CPM_ARQH1_ARQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1			0x0000F170 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQLEN1_ARQLEN_S		0
+#define VF_SB_CPM_ARQLEN1_ARQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ARQLEN1_ARQVFE_S		28
+#define VF_SB_CPM_ARQLEN1_ARQVFE_M		BIT(28)
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_S		29
+#define VF_SB_CPM_ARQLEN1_ARQOVFL_M		BIT(29)
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_S		30
+#define VF_SB_CPM_ARQLEN1_ARQCRIT_M		BIT(30)
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_S		31
+#define VF_SB_CPM_ARQLEN1_ARQENABLE_M		BIT(31)
+#define VF_SB_CPM_ARQT1				0x0000F190 /* Reset Source: CORER */
+#define VF_SB_CPM_ARQT1_ARQT_S			0
+#define VF_SB_CPM_ARQT1_ARQT_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQBAH1			0x0000F110 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAH1_ATQBAH_S		0
+#define VF_SB_CPM_ATQBAH1_ATQBAH_M		MAKEMASK(0xFFFFFFFF, 0)
+#define VF_SB_CPM_ATQBAL1			0x0000F100 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQBAL1_ATQBAL_S		6
+#define VF_SB_CPM_ATQBAL1_ATQBAL_M		MAKEMASK(0x3FFFFFF, 6)
+#define VF_SB_CPM_ATQH1				0x0000F130 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQH1_ATQH_S			0
+#define VF_SB_CPM_ATQH1_ATQH_M			MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1			0x0000F120 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQLEN1_ATQLEN_S		0
+#define VF_SB_CPM_ATQLEN1_ATQLEN_M		MAKEMASK(0x3FF, 0)
+#define VF_SB_CPM_ATQLEN1_ATQVFE_S		28
+#define VF_SB_CPM_ATQLEN1_ATQVFE_M		BIT(28)
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_S		29
+#define VF_SB_CPM_ATQLEN1_ATQOVFL_M		BIT(29)
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_S		30
+#define VF_SB_CPM_ATQLEN1_ATQCRIT_M		BIT(30)
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_S		31
+#define VF_SB_CPM_ATQLEN1_ATQENABLE_M		BIT(31)
+#define VF_SB_CPM_ATQT1				0x0000F140 /* Reset Source: CORER */
+#define VF_SB_CPM_ATQT1_ATQT_S			0
+#define VF_SB_CPM_ATQT1_ATQT_M			MAKEMASK(0x3FF, 0)
+#define VFINT_DYN_CTL(_i)			(0x00023000 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_DYN_CTL_MAX_INDEX			7
+#define VFINT_DYN_CTL_INTENA_S			0
+#define VFINT_DYN_CTL_INTENA_M			BIT(0)
+#define VFINT_DYN_CTL_CLEARPBA_S		1
+#define VFINT_DYN_CTL_CLEARPBA_M		BIT(1)
+#define VFINT_DYN_CTL_SWINT_TRIG_S		2
+#define VFINT_DYN_CTL_SWINT_TRIG_M		BIT(2)
+#define VFINT_DYN_CTL_ITR_INDX_S		3
+#define VFINT_DYN_CTL_ITR_INDX_M		MAKEMASK(0x3, 3)
+#define VFINT_DYN_CTL_INTERVAL_S		5
+#define VFINT_DYN_CTL_INTERVAL_M		MAKEMASK(0xFFF, 5)
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_S		24
+#define VFINT_DYN_CTL_SW_ITR_INDX_ENA_M		BIT(24)
+#define VFINT_DYN_CTL_SW_ITR_INDX_S		25
+#define VFINT_DYN_CTL_SW_ITR_INDX_M		MAKEMASK(0x3, 25)
+#define VFINT_DYN_CTL_WB_ON_ITR_S		30
+#define VFINT_DYN_CTL_WB_ON_ITR_M		BIT(30)
+#define VFINT_DYN_CTL_INTENA_MSK_S		31
+#define VFINT_DYN_CTL_INTENA_MSK_M		BIT(31)
+#define VFINT_ITR_0(_i)				(0x00023004 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_0_MAX_INDEX			7
+#define VFINT_ITR_0_INTERVAL_S			0
+#define VFINT_ITR_0_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_1(_i)				(0x00023008 + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_1_MAX_INDEX			7
+#define VFINT_ITR_1_INTERVAL_S			0
+#define VFINT_ITR_1_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFINT_ITR_2(_i)				(0x0002300C + ((_i) * 4096)) /* _i=0...7 */ /* Reset Source: PFR */
+#define VFINT_ITR_2_MAX_INDEX			7
+#define VFINT_ITR_2_INTERVAL_S			0
+#define VFINT_ITR_2_INTERVAL_M			MAKEMASK(0xFFF, 0)
+#define VFQRX_TAIL(_QRX)			(0x0002E000 + ((_QRX) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQRX_TAIL_MAX_INDEX			255
+#define VFQRX_TAIL_TAIL_S			0
+#define VFQRX_TAIL_TAIL_M			MAKEMASK(0x1FFF, 0)
+#define VFQTX_COMM_DBELL(_DBQM)			(0x00030000 + ((_DBQM) * 4)) /* _i=0...255 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBELL_MAX_INDEX		255
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_S	0
+#define VFQTX_COMM_DBELL_QTX_COMM_DBELL_M	MAKEMASK(0xFFFFFFFF, 0)
+#define VFQTX_COMM_DBLQ_DBELL(_DBLQ)		(0x00022000 + ((_DBLQ) * 4)) /* _i=0...3 */ /* Reset Source: CORER */
+#define VFQTX_COMM_DBLQ_DBELL_MAX_INDEX		3
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_S		0
+#define VFQTX_COMM_DBLQ_DBELL_TAIL_M		MAKEMASK(0x1FFF, 0)
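+
+/* Illustrative note (not part of the autogenerated register map): each
+ * register field above is described by a shift (_S) and mask (_M) pair,
+ * where MAKEMASK(m, s) is assumed to expand to ((m) << (s)).  A field is
+ * read by masking and then shifting, e.g. with the rd32() read helper
+ * assumed from ice_osdep.h:
+ *
+ *	u32 sem = rd32(hw, PFHH_SEM);
+ *	u32 owner = (sem & PFHH_SEM_PF_OWNER_M) >> PFHH_SEM_PF_OWNER_S;
+ */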
+
+#endif
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 02/31] net/ice/base: add basic structures
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
                     ` (29 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the basic structures required by the driver to manage the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_type.h | 869 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 869 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_type.h

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
new file mode 100644
index 0000000..256bf3f
--- /dev/null
+++ b/drivers/net/ice/base/ice_type.h
@@ -0,0 +1,869 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_TYPE_H_
+#define _ICE_TYPE_H_
+
+#define ETH_ALEN	6
+
+#define ETH_HEADER_LEN	14
+
+#define BIT(a) (1UL << (a))
+#define BIT_ULL(a) (1ULL << (a))
+
+#define BITS_PER_BYTE	8
+
+#define ICE_BYTES_PER_WORD	2
+#define ICE_BYTES_PER_DWORD	4
+#define ICE_MAX_TRAFFIC_CLASS	8
+
+
+#include "ice_status.h"
+#include "ice_hw_autogen.h"
+#include "ice_devids.h"
+#include "ice_osdep.h"
+#include "ice_controlq.h"
+#include "ice_lan_tx_rx.h"
+#include "ice_flex_type.h"
+#include "ice_protocol_type.h"
+
+static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
+{
+	return ice_is_bit_set(&bitmap, tc);
+}
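+
+/* Example (illustrative): for a TC bitmap of 0x9 (bits 0 and 3 set),
+ * ice_is_tc_ena(0x9, 0) and ice_is_tc_ena(0x9, 3) are true, while
+ * ice_is_tc_ena(0x9, 1) is false.
+ */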
+
+#ifndef DIV_64BIT
+#define DIV_64BIT(n, d) ((n) / (d))
+#endif /* DIV_64BIT */
+
+static inline u64 round_up_64bit(u64 a, u32 b)
+{
+	return DIV_64BIT(((a) + (b) / 2), (b));
+}
+
+static inline u32 ice_round_to_num(u32 N, u32 R)
+{
+	return ((((N) % (R)) < ((R) / 2)) ? (((N) / (R)) * (R)) :
+		((((N) + (R) - 1) / (R)) * (R)));
+}
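+
+/* Worked examples (illustrative): ice_round_to_num() rounds N to the
+ * nearest multiple of R using integer math: ice_round_to_num(6, 5) == 5,
+ * while ice_round_to_num(7, 5) == 10 because 7 % 5 == 2 is not below
+ * 5 / 2 == 2, so it rounds up.  round_up_64bit() is a nearest-integer
+ * division: round_up_64bit(7, 5) == 1 and round_up_64bit(8, 5) == 2.
+ */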
+
+/* Driver always calls main vsi_handle first */
+#define ICE_MAIN_VSI_HANDLE		0
+
+/* Convert from ms to the 1 usec global time (this is the GTIME resolution) */
+#define ICE_MS_TO_GTIME(time)		((time) * 1000)
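+
+/* Example (illustrative): ICE_MS_TO_GTIME(5) == 5000, i.e. 5 ms expressed
+ * in 1 usec GTIME units.
+ */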
+
+/* Data type manipulation macros. */
+#define ICE_HI_DWORD(x)		((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define ICE_LO_DWORD(x)		((u32)((x) & 0xFFFFFFFF))
+#define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
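+
+/* Examples (illustrative): for x == 0x1122334455667788ULL,
+ * ICE_HI_DWORD(x) == 0x11223344 and ICE_LO_DWORD(x) == 0x55667788;
+ * for x == 0x12345678, ICE_HI_WORD(x) == 0x1234.
+ */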
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_INIT		BIT_ULL(1)
+#define ICE_DBG_RELEASE		BIT_ULL(2)
+
+#define ICE_DBG_LINK		BIT_ULL(4)
+#define ICE_DBG_PHY		BIT_ULL(5)
+#define ICE_DBG_QCTX		BIT_ULL(6)
+#define ICE_DBG_NVM		BIT_ULL(7)
+#define ICE_DBG_LAN		BIT_ULL(8)
+#define ICE_DBG_FLOW		BIT_ULL(9)
+#define ICE_DBG_DCB		BIT_ULL(10)
+#define ICE_DBG_DIAG		BIT_ULL(11)
+#define ICE_DBG_FD		BIT_ULL(12)
+#define ICE_DBG_SW		BIT_ULL(13)
+#define ICE_DBG_SCHED		BIT_ULL(14)
+
+#define ICE_DBG_PKG		BIT_ULL(16)
+#define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_AQ_MSG		BIT_ULL(24)
+#define ICE_DBG_AQ_DESC		BIT_ULL(25)
+#define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
+#define ICE_DBG_AQ_CMD		BIT_ULL(27)
+#define ICE_DBG_AQ		(ICE_DBG_AQ_MSG		| \
+				 ICE_DBG_AQ_DESC	| \
+				 ICE_DBG_AQ_DESC_BUF	| \
+				 ICE_DBG_AQ_CMD)
+
+#define ICE_DBG_USER		BIT_ULL(31)
+#define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
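+
+/* Example (illustrative): to log only initialization and admin queue
+ * traffic, a driver could set
+ *	hw->debug_mask = ICE_DBG_INIT | ICE_DBG_AQ;
+ * and ICE_DBG_ALL enables every category at once.
+ */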
+
+enum ice_aq_res_ids {
+	ICE_NVM_RES_ID = 1,
+	ICE_SPD_RES_ID,
+	ICE_CHANGE_LOCK_RES_ID,
+	ICE_GLOBAL_CFG_LOCK_RES_ID
+};
+
+/* FW update timeout definitions are in milliseconds */
+#define ICE_NVM_TIMEOUT			180000
+#define ICE_CHANGE_LOCK_TIMEOUT		1000
+#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
+
+enum ice_aq_res_access_type {
+	ICE_RES_READ = 1,
+	ICE_RES_WRITE
+};
+
+struct ice_driver_ver {
+	u8 major_ver;
+	u8 minor_ver;
+	u8 build_ver;
+	u8 subbuild_ver;
+	u8 driver_string[32];
+};
+
+enum ice_fc_mode {
+	ICE_FC_NONE = 0,
+	ICE_FC_RX_PAUSE,
+	ICE_FC_TX_PAUSE,
+	ICE_FC_FULL,
+	ICE_FC_PFC,
+	ICE_FC_DFLT
+};
+
+enum ice_fec_mode {
+	ICE_FEC_NONE = 0,
+	ICE_FEC_RS,
+	ICE_FEC_BASER,
+	ICE_FEC_AUTO
+};
+
+enum ice_set_fc_aq_failures {
+	ICE_SET_FC_AQ_FAIL_NONE = 0,
+	ICE_SET_FC_AQ_FAIL_GET,
+	ICE_SET_FC_AQ_FAIL_SET,
+	ICE_SET_FC_AQ_FAIL_UPDATE
+};
+
+/* These are structs for managing hardware information and operations */
+/* MAC types */
+enum ice_mac_type {
+	ICE_MAC_UNKNOWN = 0,
+	ICE_MAC_GENERIC,
+};
+
+/* Media Types */
+enum ice_media_type {
+	ICE_MEDIA_UNKNOWN = 0,
+	ICE_MEDIA_FIBER,
+	ICE_MEDIA_BASET,
+	ICE_MEDIA_BACKPLANE,
+	ICE_MEDIA_DA,
+};
+
+/* Software VSI types. */
+enum ice_vsi_type {
+	ICE_VSI_PF = 0,
+#ifdef ADQ_SUPPORT
+	ICE_VSI_CHNL = 4,
+#endif /* ADQ_SUPPORT */
+};
+
+struct ice_link_status {
+	/* Refer to ice_aq_phy_type for bit definitions */
+	u64 phy_type_low;
+	u64 phy_type_high;
+	u8 topo_media_conflict;
+	u16 max_frame_size;
+	u16 link_speed;
+	u16 req_speeds;
+	u8 lse_ena;	/* Link Status Event notification */
+	u8 link_info;
+	u8 an_info;
+	u8 ext_info;
+	u8 fec_info;
+	u8 pacing;
+	/* Refer to the module_type[ICE_MODULE_TYPE_TOTAL_BYTE] defines in the
+	 * ice_aqc_get_phy_caps structure
+	 */
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+};
+
+/* Different data queue types: These are mainly for SW consumption. */
+enum ice_q {
+	ICE_DATA_Q_DOORBELL,
+	ICE_DATA_Q_CMPL,
+	ICE_DATA_Q_QUANTA,
+	ICE_DATA_Q_RX,
+	ICE_DATA_Q_TX,
+};
+
+/* Different reset sources for which a disable queue AQ call has to be made in
+ * order to clean the TX scheduler as a part of the reset
+ */
+enum ice_disq_rst_src {
+	ICE_NO_RESET = 0,
+	ICE_VM_RESET,
+};
+
+/* PHY info such as phy_type, etc... */
+struct ice_phy_info {
+	struct ice_link_status link_info;
+	struct ice_link_status link_info_old;
+	u64 phy_type_low;
+	u64 phy_type_high;
+	enum ice_media_type media_type;
+	u8 get_link_info;
+};
+
+#define ICE_MAX_NUM_MIRROR_RULES	64
+
+/* Common HW capabilities for SW use */
+struct ice_hw_common_caps {
+	/* Write CSR protection */
+	u64 wr_csr_prot;
+	u32 switching_mode;
+	/* switching mode supported - EVB switching (including cloud) */
+#define ICE_NVM_IMAGE_TYPE_EVB		0x0
+
+	/* Manageability mode & supported protocols over MCTP */
+	u32 mgmt_mode;
+#define ICE_MGMT_MODE_PASS_THRU_MODE_M		0xF
+#define ICE_MGMT_MODE_CTL_INTERFACE_M		0xF0
+#define ICE_MGMT_MODE_REDIR_SB_INTERFACE_M	0xF00
+
+	u32 mgmt_protocols_mctp;
+#define ICE_MGMT_MODE_PROTO_RSVD	BIT(0)
+#define ICE_MGMT_MODE_PROTO_PLDM	BIT(1)
+#define ICE_MGMT_MODE_PROTO_OEM		BIT(2)
+#define ICE_MGMT_MODE_PROTO_NC_SI	BIT(3)
+
+	u32 os2bmc;
+	u32 valid_functions;
+
+	/* RSS related capabilities */
+	u32 rss_table_size;		/* 512 for PFs and 64 for VFs */
+	u32 rss_table_entry_width;	/* RSS Entry width in bits */
+
+	/* TX/RX queues */
+	u32 num_rxq;			/* Number/Total RX queues */
+	u32 rxq_first_id;		/* First queue ID for RX queues */
+	u32 num_txq;			/* Number/Total TX queues */
+	u32 txq_first_id;		/* First queue ID for TX queues */
+
+	/* MSI-X vectors */
+	u32 num_msix_vectors;
+	u32 msix_vector_first_id;
+
+	/* Max MTU for function or device */
+	u32 max_mtu;
+
+	/* WOL related */
+	u32 num_wol_proxy_fltr;
+	u32 wol_proxy_vsi_seid;
+
+	/* LED/SDP pin count */
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+
+	/* LED/SDP - Supports up to 12 LED pins and 8 SDP signals */
+#define ICE_MAX_SUPPORTED_GPIO_LED	12
+#define ICE_MAX_SUPPORTED_GPIO_SDP	8
+	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
+	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+
+	/* EVB capabilities */
+	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
+	u8 evb_802_1_qbh;		/* Bridge Port Extension */
+
+	u8 iscsi;
+	u8 mgmt_cem;
+
+	/* WoL and APM support */
+#define ICE_WOL_SUPPORT_M		BIT(0)
+#define ICE_ACPI_PROG_MTHD_M		BIT(1)
+#define ICE_PROXY_SUPPORT_M		BIT(2)
+	u8 apm_wol_support;
+	u8 acpi_prog_mthd;
+	u8 proxy_support;
+};
+
+
+/* Function specific capabilities */
+struct ice_hw_func_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 guar_num_vsi;
+};
+
+/* Device wide capabilities */
+struct ice_hw_dev_caps {
+	struct ice_hw_common_caps common_cap;
+	u32 num_vsi_allocd_to_host;	/* Excluding EMP VSI */
+};
+
+
+/* Information about MAC such as address, etc... */
+struct ice_mac_info {
+	u8 lan_addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u8 wol_addr[ETH_ALEN];
+};
+
+/* PCI bus types */
+enum ice_bus_type {
+	ice_bus_unknown = 0,
+	ice_bus_pci_express,
+	ice_bus_embedded, /* Device is embedded as opposed to an add-in card */
+	ice_bus_reserved
+};
+
+/* PCI bus speeds */
+enum ice_pcie_bus_speed {
+	ice_pcie_speed_unknown	= 0xff,
+	ice_pcie_speed_2_5GT	= 0x14,
+	ice_pcie_speed_5_0GT	= 0x15,
+	ice_pcie_speed_8_0GT	= 0x16,
+	ice_pcie_speed_16_0GT	= 0x17
+};
+
+/* PCI bus widths */
+enum ice_pcie_link_width {
+	ice_pcie_lnk_width_resrv	= 0x00,
+	ice_pcie_lnk_x1			= 0x01,
+	ice_pcie_lnk_x2			= 0x02,
+	ice_pcie_lnk_x4			= 0x04,
+	ice_pcie_lnk_x8			= 0x08,
+	ice_pcie_lnk_x12		= 0x0C,
+	ice_pcie_lnk_x16		= 0x10,
+	ice_pcie_lnk_x32		= 0x20,
+	ice_pcie_lnk_width_unknown	= 0xff,
+};
+
+/* Reset types used to determine which kind of reset was requested. These
+ * defines match the RESET_TYPE field of the GLGEN_RSTAT register.
+ * ICE_RESET_PFR does not match any RESET_TYPE field in the GLGEN_RSTAT
+ * register because its reset source is different from the other types listed.
+ */
+enum ice_reset_req {
+	ICE_RESET_POR	= 0,
+	ICE_RESET_INVAL	= 0,
+	ICE_RESET_CORER	= 1,
+	ICE_RESET_GLOBR	= 2,
+	ICE_RESET_EMPR	= 3,
+	ICE_RESET_PFR	= 4,
+};
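+
+/* Illustrative sketch (assumes the GLGEN_RSTAT register and its
+ * GLGEN_RSTAT_RESET_TYPE_S/_M field definitions exist in the autogenerated
+ * register map): the last hardware-recorded reset type could be decoded as
+ *
+ *	u32 type = (rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_RESET_TYPE_M) >>
+ *		   GLGEN_RSTAT_RESET_TYPE_S;
+ */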
+
+/* Bus parameters */
+struct ice_bus_info {
+	enum ice_pcie_bus_speed speed;
+	enum ice_pcie_link_width width;
+	enum ice_bus_type type;
+	u16 domain_num;
+	u16 device;
+	u8 func;
+	u8 bus_num;
+};
+
+/* Flow control (FC) parameters */
+struct ice_fc_info {
+	enum ice_fc_mode current_mode;	/* FC mode in effect */
+	enum ice_fc_mode req_mode;	/* FC mode requested by caller */
+};
+
+/* NVM Information */
+struct ice_nvm_info {
+	u32 eetrack;			/* NVM data version */
+	u32 oem_ver;			/* OEM version info */
+	u16 sr_words;			/* Shadow RAM size in words */
+	u16 ver;			/* NVM package version */
+	u8 blank_nvm_mode;		/* is NVM empty (no FW present) */
+};
+
+/* Max number of port-to-queue branches w.r.t. topology */
+#define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
+/* ICE_DFLT_AGG_ID means that all new VM(s)/VSI nodes connect to the
+ * driver-defined policy for the default aggregator
+ */
+#define ICE_INVAL_TEID 0xFFFFFFFF
+#define ICE_DFLT_AGG_ID 0
+
+struct ice_sched_node {
+	struct ice_sched_node *parent;
+	struct ice_sched_node *sibling; /* next sibling in the same layer */
+	struct ice_sched_node **children;
+	struct ice_aqc_txsched_elem_data info;
+	u32 agg_id;			/* aggregator group id */
+	u16 vsi_handle;
+	u8 in_use;			/* suspended or in use */
+	u8 tx_sched_layer;		/* Logical Layer (1-9) */
+	u8 num_children;
+	u8 tc_num;
+	u8 owner;
+#define ICE_SCHED_NODE_OWNER_LAN	0
+#define ICE_SCHED_NODE_OWNER_AE		1
+#define ICE_SCHED_NODE_OWNER_RDMA	2
+};
+
+/* Access Macros for Tx Sched Elements data */
+#define ICE_TXSCHED_GET_NODE_TEID(x) LE32_TO_CPU((x)->info.node_teid)
+#define ICE_TXSCHED_GET_PARENT_TEID(x) LE32_TO_CPU((x)->info.parent_teid)
+#define ICE_TXSCHED_GET_CIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_EIR_RL_ID(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_profile_idx)
+#define ICE_TXSCHED_GET_SRL_ID(x) LE16_TO_CPU((x)->info.srl_id)
+#define ICE_TXSCHED_GET_CIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.cir_bw.bw_alloc)
+#define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
+	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)
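+
+/* Example (illustrative): given a struct ice_sched_node *node, the node and
+ * parent TEIDs are read in CPU byte order as
+ *
+ *	u32 teid = ICE_TXSCHED_GET_NODE_TEID(node);
+ *	u32 parent_teid = ICE_TXSCHED_GET_PARENT_TEID(node);
+ */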
+
+struct ice_sched_rl_profile {
+	u32 rate; /* In Kbps */
+	struct ice_aqc_rl_profile_elem info;
+};
+
+/* The aggregator type determines whether the identifier is for a VSI group,
+ * an aggregator group, an aggregator of queues, or a queue group.
+ */
+enum ice_agg_type {
+	ICE_AGG_TYPE_UNKNOWN = 0,
+	ICE_AGG_TYPE_TC,
+	ICE_AGG_TYPE_AGG, /* aggregator */
+	ICE_AGG_TYPE_VSI,
+	ICE_AGG_TYPE_QG,
+	ICE_AGG_TYPE_Q
+};
+
+/* Rate limit types */
+enum ice_rl_type {
+	ICE_UNKNOWN_BW = 0,
+	ICE_MIN_BW,		/* for cir profile */
+	ICE_MAX_BW,		/* for eir profile */
+	ICE_SHARED_BW		/* for shared profile */
+};
+
+#define ICE_SCHED_MIN_BW		500		/* in Kbps */
+#define ICE_SCHED_MAX_BW		100000000	/* in Kbps */
+#define ICE_SCHED_DFLT_BW		0xFFFFFFFF	/* unlimited */
+#define ICE_SCHED_NO_PRIORITY		0
+#define ICE_SCHED_NO_BW_WT		0
+#define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_NO_SHARED_RL_PROF_ID	0xFFFF
+#define ICE_SCHED_DFLT_BW_WT		1
+#define ICE_SCHED_INVAL_PROF_ID		0xFFFF
+#define ICE_SCHED_DFLT_BURST_SIZE	(15 * 1024)	/* in bytes (15k) */
+
+/* Access Macros for Tx Sched RL Profile data */
+#define ICE_TXSCHED_GET_RL_PROF_ID(p) LE16_TO_CPU((p)->info.profile_id)
+#define ICE_TXSCHED_GET_RL_MBS(p) LE16_TO_CPU((p)->info.max_burst_size)
+#define ICE_TXSCHED_GET_RL_MULTIPLIER(p) LE16_TO_CPU((p)->info.rl_multiply)
+#define ICE_TXSCHED_GET_RL_WAKEUP_MV(p) LE16_TO_CPU((p)->info.wake_up_calc)
+#define ICE_TXSCHED_GET_RL_ENCODE(p) LE16_TO_CPU((p)->info.rl_encode)
+
+
+/* The following tree example shows the naming conventions followed under
+ * ice_port_info struct for default scheduler tree topology.
+ *
+ *                 A tree on a port
+ *                       *                ---> root node
+ *        (TC0)/  /  /  / \  \  \  \(TC7) ---> num_branches (range:1- 8)
+ *            *  *  *  *   *  *  *  *     |
+ *           /                            |
+ *          *                             |
+ *         /                              |-> num_elements (range:1 - 9)
+ *        *                               |   implies num_of_layers
+ *       /                                |
+ *   (a)*                                 |
+ *
+ *  (a) is the last_node_teid (not of type leaf). A leaf node is created
+ *  under (a) as a child node where queues get added; the Tx/Rx queue admin
+ *  commands need the TEID of (a) to add queues.
+ *
+ *  This tree
+ *       -> has 8 branches (one for each TC)
+ *       -> First branch (TC0) has 4 elements
+ *       -> has 4 layers
+ *       -> (a) is the topmost layer node created by firmware on branch 0
+ *
+ *  Note: The asterisk tree above covers only basic terminology and a simple
+ *  scenario. Refer to the documentation for more info.
+ */
+
+/* Data structure for saving BW information */
+enum ice_bw_type {
+	ICE_BW_TYPE_PRIO,
+	ICE_BW_TYPE_CIR,
+	ICE_BW_TYPE_CIR_WT,
+	ICE_BW_TYPE_EIR,
+	ICE_BW_TYPE_EIR_WT,
+	ICE_BW_TYPE_SHARED,
+	ICE_BW_TYPE_CNT		/* This must be last */
+};
+
+struct ice_bw {
+	u32 bw;
+	u16 bw_alloc;
+};
+
+struct ice_bw_type_info {
+	ice_declare_bitmap(bw_t_bitmap, ICE_BW_TYPE_CNT);
+	u8 generic;
+	struct ice_bw cir_bw;
+	struct ice_bw eir_bw;
+	u32 shared_bw;
+};
+
+/* VSI type list entry to locate corresponding VSI/aggregator nodes */
+struct ice_sched_vsi_info {
+	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_sched_node *ag_node[ICE_MAX_TRAFFIC_CLASS];
+	u16 max_lanq[ICE_MAX_TRAFFIC_CLASS];
+	/* bw_t_info saves VSI bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+};
+
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct ice_dcb_ets_cfg {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prio_table[ICE_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[ICE_MAX_TRAFFIC_CLASS];
+	u8 tsatable[ICE_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct ice_dcb_pfc_cfg {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcena;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct ice_dcb_app_priority_table {
+	u16 prot_id;
+	u8 priority;
+	u8 selector;
+};
+
+#define ICE_MAX_USER_PRIORITY	8
+#define ICE_DCBX_MAX_APPS	32
+#define ICE_LLDPDU_SIZE		1500
+#define ICE_TLV_STATUS_OPER	0x1
+#define ICE_TLV_STATUS_SYNC	0x2
+#define ICE_TLV_STATUS_ERR	0x4
+#define ICE_APP_PROT_ID_FCOE	0x8906
+#define ICE_APP_PROT_ID_ISCSI	0x0cbc
+#define ICE_APP_PROT_ID_FIP	0x8914
+#define ICE_APP_SEL_ETHTYPE	0x1
+#define ICE_APP_SEL_TCPIP	0x2
+#define ICE_CEE_APP_SEL_ETHTYPE	0x0
+#define ICE_CEE_APP_SEL_TCPIP	0x1
+
+struct ice_dcbx_cfg {
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct ice_dcb_ets_cfg etscfg;
+	struct ice_dcb_ets_cfg etsrec;
+	struct ice_dcb_pfc_cfg pfc;
+	struct ice_dcb_app_priority_table app[ICE_DCBX_MAX_APPS];
+	u8 dcbx_mode;
+#define ICE_DCBX_MODE_CEE	0x1
+#define ICE_DCBX_MODE_IEEE	0x2
+	u8 app_mode;
+#define ICE_DCBX_APPS_NON_WILLING	0x1
+};
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+
+struct ice_port_info {
+	struct ice_sched_node *root;	/* Root Node per Port */
+	struct ice_hw *hw;		/* back pointer to hw instance */
+	u32 last_node_teid;		/* scheduler last node info */
+	u16 sw_id;			/* Initial switch ID belongs to port */
+	u16 pf_vf_num;
+	u8 port_state;
+#define ICE_SCHED_PORT_STATE_INIT	0x0
+#define ICE_SCHED_PORT_STATE_READY	0x1
+	u16 dflt_tx_vsi_rule_id;
+	u16 dflt_tx_vsi_num;
+	u16 dflt_rx_vsi_rule_id;
+	u16 dflt_rx_vsi_num;
+	struct ice_fc_info fc;
+	struct ice_mac_info mac;
+	struct ice_phy_info phy;
+	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	/* List contain profile id(s) and other params per layer */
+	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+#if !defined(NO_DCB_SUPPORT) || defined(ADQ_SUPPORT)
+	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
+#endif /* !NO_DCB_SUPPORT || ADQ_SUPPORT */
+	u8 lport;
+#define ICE_LPORT_MASK		0xff
+	u8 is_vf;
+};
+
+struct ice_switch_info {
+	struct LIST_HEAD_TYPE vsi_list_map_head;
+	struct ice_sw_recipe *recp_list;
+};
+
+/* FW logging configuration */
+struct ice_fw_log_evnt {
+	u8 cfg : 4;	/* New event enables to be configured */
+	u8 cur : 4;	/* Current/active event enables */
+};
+
+struct ice_fw_log_cfg {
+	u8 cq_en : 1;    /* FW logging is enabled via the control queue */
+	u8 uart_en : 1;  /* FW logging is enabled via UART for all PFs */
+	u8 actv_evnts;   /* Accumulation of currently enabled log events */
+
+#define ICE_FW_LOG_EVNT_INFO	(ICE_AQC_FW_LOG_INFO_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
+};
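+
+/* Illustrative sketch (not part of this patch): request ERR- and INFO-level
+ * logging for one module by setting the 4-bit "cfg" enables above; "cur"
+ * reflects what FW has acknowledged. The caller is assumed to keep "module"
+ * below ICE_AQC_FW_LOG_ID_MAX.
+ */
+static inline void
+ice_fw_log_req_example(struct ice_fw_log_cfg *cfg, u16 module)
+{
+	cfg->evnts[module].cfg = ICE_FW_LOG_EVNT_ERR | ICE_FW_LOG_EVNT_INFO;
+}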
+
+/* Port hardware description */
+struct ice_hw {
+	u8 *hw_addr;
+	void *back;
+	struct ice_aqc_layer_props *layer_info;
+	struct ice_port_info *port_info;
+	/* 2D Array for each Tx Sched RL Profile type */
+	struct ice_sched_rl_profile **cir_profiles;
+	struct ice_sched_rl_profile **eir_profiles;
+	struct ice_sched_rl_profile **srl_profiles;
+	u64 debug_mask;		/* BITMAP for debug mask */
+	enum ice_mac_type mac_type;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+
+	u8 pf_id;		/* device profile info */
+
+	u16 max_burst_size;	/* driver sets this value */
+	/* TX Scheduler values */
+	u16 num_tx_sched_layers;
+	u16 num_tx_sched_phys_layers;
+	u8 flattened_layers;
+	u8 max_cgds;
+	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+	struct LIST_HEAD_TYPE agg_list;	/* lists all aggregator */
+	struct ice_bw_type_info tc_node_bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_vsi_ctx *vsi_ctx[ICE_MAX_VSI];
+	u8 evb_veb;		/* true for VEB, false for VEPA */
+	u8 reset_ongoing;	/* true if hw is in reset, false otherwise */
+	struct ice_bus_info bus;
+	struct ice_nvm_info nvm;
+	struct ice_hw_dev_caps dev_caps;	/* device capabilities */
+	struct ice_hw_func_caps func_caps;	/* function capabilities */
+
+	struct ice_switch_info *switch_info;	/* switch filter lists */
+
+	/* Control Queue info */
+	struct ice_ctl_q_info adminq;
+	struct ice_ctl_q_info mailboxq;
+
+	u8 api_branch;		/* API branch version */
+	u8 api_maj_ver;		/* API major version */
+	u8 api_min_ver;		/* API minor version */
+	u8 api_patch;		/* API patch version */
+	u8 fw_branch;		/* firmware branch version */
+	u8 fw_maj_ver;		/* firmware major version */
+	u8 fw_min_ver;		/* firmware minor version */
+	u8 fw_patch;		/* firmware patch version */
+	u32 fw_build;		/* firmware build number */
+
+	struct ice_fw_log_cfg fw_log;
+
+/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
+ * register. Used for determining the itr/intrl granularity during
+ * initialization.
+ */
+#define ICE_MAX_AGG_BW_200G	0x0
+#define ICE_MAX_AGG_BW_100G	0x1
+#define ICE_MAX_AGG_BW_50G	0x2
+#define ICE_MAX_AGG_BW_25G	0x3
+	/* ITR granularity for different speeds */
+#define ICE_ITR_GRAN_ABOVE_25	2
+#define ICE_ITR_GRAN_MAX_25	4
+	/* ITR granularity in 1 us */
+	u8 itr_gran;
+	/* INTRL granularity for different speeds */
+#define ICE_INTRL_GRAN_ABOVE_25	4
+#define ICE_INTRL_GRAN_MAX_25	8
+	/* INTRL granularity in 1 us */
+	u8 intrl_gran;
+
+	u8 ucast_shared;	/* true if VSIs can share unicast addr */
+
+
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct ice_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+#define ICE_MAX_UP	8
+
+/* Statistics collected per VEB per User Priority (UP) for up to 8 UPs */
+struct ice_veb_up_stats {
+	u64 up_rx_pkts[ICE_MAX_UP];
+	u64 up_rx_bytes[ICE_MAX_UP];
+	u64 up_tx_pkts[ICE_MAX_UP];
+	u64 up_tx_bytes[ICE_MAX_UP];
+};
+
+/* Statistics collected by the MAC */
+struct ice_hw_port_stats {
+	/* eth stats collected by the port */
+	struct ice_eth_stats eth;
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_len_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_pkt_dropped;	/* mspdc */
+};
+
+enum ice_sw_fwd_act_type {
+	ICE_FWD_TO_VSI = 0,
+	ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */
+	ICE_FWD_TO_Q,
+	ICE_FWD_TO_QGRP,
+	ICE_DROP_PACKET,
+	ICE_INVAL_ACT
+};
+
+/* Checksum and Shadow RAM pointers */
+#define ICE_SR_NVM_CTRL_WORD			0x00
+#define ICE_SR_PHY_ANALOG_PTR			0x04
+#define ICE_SR_OPTION_ROM_PTR			0x05
+#define ICE_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define ICE_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define ICE_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define ICE_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define ICE_SR_EMP_IMAGE_PTR			0x0B
+#define ICE_SR_PE_IMAGE_PTR			0x0C
+#define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define ICE_SR_MNG_CFG_PTR			0x0E
+#define ICE_SR_EMP_MODULE_PTR			0x0F
+#define ICE_SR_PBA_FLAGS			0x15
+#define ICE_SR_PBA_BLOCK_PTR			0x16
+#define ICE_SR_BOOT_CFG_PTR			0x17
+#define ICE_SR_NVM_WOL_CFG			0x19
+#define ICE_NVM_OEM_VER_OFF			0x83
+#define ICE_SR_NVM_DEV_STARTER_VER		0x18
+#define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR	0x27
+#define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR	0x28
+#define ICE_SR_NVM_MAP_VER			0x29
+#define ICE_SR_NVM_IMAGE_VER			0x2A
+#define ICE_SR_NVM_STRUCTURE_VER		0x2B
+#define ICE_SR_NVM_EETRACK_LO			0x2D
+#define ICE_SR_NVM_EETRACK_HI			0x2E
+#define ICE_NVM_VER_LO_SHIFT			0
+#define ICE_NVM_VER_LO_MASK			(0xff << ICE_NVM_VER_LO_SHIFT)
+#define ICE_NVM_VER_HI_SHIFT			12
+#define ICE_NVM_VER_HI_MASK			(0xf << ICE_NVM_VER_HI_SHIFT)
+#define ICE_OEM_EETRACK_ID			0xffffffff
+#define ICE_OEM_VER_PATCH_SHIFT			0
+#define ICE_OEM_VER_PATCH_MASK		(0xff << ICE_OEM_VER_PATCH_SHIFT)
+#define ICE_OEM_VER_BUILD_SHIFT			8
+#define ICE_OEM_VER_BUILD_MASK		(0xffff << ICE_OEM_VER_BUILD_SHIFT)
+#define ICE_OEM_VER_SHIFT			24
+#define ICE_OEM_VER_MASK			(0xff << ICE_OEM_VER_SHIFT)
+#define ICE_SR_VPD_PTR				0x2F
+#define ICE_SR_PXE_SETUP_PTR			0x30
+#define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR		0x31
+#define ICE_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define ICE_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define ICE_SR_VLAN_CFG_PTR			0x37
+#define ICE_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define ICE_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define ICE_SR_GLOBR_REGS_AUTO_LOAD_PTR		0x3B
+#define ICE_SR_CORER_REGS_AUTO_LOAD_PTR		0x3C
+#define ICE_SR_PHY_CFG_SCRIPT_PTR		0x3D
+#define ICE_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define ICE_SR_SW_CHECKSUM_WORD			0x3F
+#define ICE_SR_PFA_PTR				0x40
+#define ICE_SR_1ST_SCRATCH_PAD_PTR		0x41
+#define ICE_SR_1ST_NVM_BANK_PTR			0x42
+#define ICE_SR_NVM_BANK_SIZE			0x43
+#define ICE_SR_1ST_OROM_BANK_PTR		0x44
+#define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
+#define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+
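+/* Illustrative sketch (not part of this patch): split the combined 32-bit
+ * OEM version word into its version/build/patch fields using the shift and
+ * mask defines above.
+ */
+static inline void
+ice_oem_ver_decode_example(u32 oem_ver, u8 *ver, u16 *build, u8 *patch)
+{
+	*ver = (oem_ver & ICE_OEM_VER_MASK) >> ICE_OEM_VER_SHIFT;
+	*build = (oem_ver & ICE_OEM_VER_BUILD_MASK) >> ICE_OEM_VER_BUILD_SHIFT;
+	*patch = (oem_ver & ICE_OEM_VER_PATCH_MASK) >> ICE_OEM_VER_PATCH_SHIFT;
+}
+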
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define ICE_SR_VPD_SIZE_WORDS		512
+#define ICE_SR_PCIE_ALT_SIZE_WORDS	512
+#define ICE_SR_CTRL_WORD_1_S		0x06
+#define ICE_SR_CTRL_WORD_1_M		(0x03 << ICE_SR_CTRL_WORD_1_S)
+
+/* Shadow RAM related */
+#define ICE_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define ICE_SR_BUF_ALIGNMENT		4096
+#define ICE_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define ICE_SR_SW_CHECKSUM_BASE		0xBABA
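+
+/* Illustrative sketch (not part of this patch): software view of the rule
+ * above; all shadow RAM words, including the checksum word itself, must sum
+ * (mod 2^16) to ICE_SR_SW_CHECKSUM_BASE.
+ */
+static inline int
+ice_sr_checksum_ok_example(const u16 *words, u32 nwords)
+{
+	u16 sum = 0;
+	u32 i;
+
+	for (i = 0; i < nwords; i++)
+		sum += words[i];
+	return sum == ICE_SR_SW_CHECKSUM_BASE;
+}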
+
+#define ICE_PBA_FLAG_DFLT		0xFAFA
+/* Hash redirection LUT for VSI - maximum array size */
+#define ICE_VSIQF_HLUT_ARRAY_SIZE	((VSIQF_HLUT_MAX_INDEX + 1) * 4)
+
+/* Defines for values in the VF_PE_DB_SIZE bits in the GLPCI_LBARCTRL
+ * register. This is needed to determine the BAR0 space for the VFs.
+ */
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_0KB 0x0
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_8KB 0x1
+#define GLPCI_LBARCTRL_VF_PE_DB_SIZE_64KB 0x2
+
+#endif /* _ICE_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 03/31] net/ice/base: add admin queue structures and commands
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 02/31] net/ice/base: add basic structures Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
                     ` (28 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures for
the admin queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 1891 +++++++++++++++++++++++++++++++++
 1 file changed, 1891 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
new file mode 100644
index 0000000..9332f84
--- /dev/null
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -0,0 +1,1891 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ADMINQ_CMD_H_
+#define _ICE_ADMINQ_CMD_H_
+
+/* This header file defines the Admin Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+
+#define ICE_MAX_VSI			768
+#define ICE_AQC_TOPO_MAX_LEVEL_NUM	0x9
+#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX	9728
+
+
+struct ice_aqc_generic {
+	__le32 param0;
+	__le32 param1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get version (direct 0x0001) */
+struct ice_aqc_get_ver {
+	__le32 rom_ver;
+	__le32 fw_build;
+	u8 fw_branch;
+	u8 fw_major;
+	u8 fw_minor;
+	u8 fw_patch;
+	u8 api_branch;
+	u8 api_major;
+	u8 api_minor;
+	u8 api_patch;
+};
+
+
+
+/* Queue Shutdown (direct 0x0003) */
+struct ice_aqc_q_shutdown {
+	__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING	BIT(0)
+	u8 reserved[12];
+};
+
+
+
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+struct ice_aqc_req_res {
+	__le16 res_id;
+#define ICE_AQC_RES_ID_NVM		1
+#define ICE_AQC_RES_ID_SDP		2
+#define ICE_AQC_RES_ID_CHNG_LOCK	3
+#define ICE_AQC_RES_ID_GLBL_LOCK	4
+	__le16 access_type;
+#define ICE_AQC_RES_ACCESS_READ		1
+#define ICE_AQC_RES_ACCESS_WRITE	2
+
+	/* Upon successful completion, FW writes this value and the driver is
+	 * expected to release the resource before the timeout expires. This
+	 * value is provided in milliseconds.
+	 */
+	__le32 timeout;
+#define ICE_AQ_RES_NVM_READ_DFLT_TIMEOUT_MS	3000
+#define ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS	180000
+#define ICE_AQ_RES_CHNG_LOCK_DFLT_TIMEOUT_MS	1000
+#define ICE_AQ_RES_GLBL_LOCK_DFLT_TIMEOUT_MS	3000
+	/* For SDP: pin id of the SDP */
+	__le32 res_number;
+	/* Status is only used for ICE_AQC_RES_ID_GLBL_LOCK */
+	__le16 status;
+#define ICE_AQ_RES_GLBL_SUCCESS		0
+#define ICE_AQ_RES_GLBL_IN_PROG		1
+#define ICE_AQ_RES_GLBL_DONE		2
+	u8 reserved[2];
+};
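+
+/* Illustrative sketch (not part of this patch): fill the direct-command
+ * payload above to request write ownership of the NVM resource. CPU_TO_LE16
+ * and CPU_TO_LE32 are assumed from the osdep layer; sending the command is
+ * out of scope here.
+ */
+static inline void
+ice_fill_nvm_req_res_example(struct ice_aqc_req_res *cmd)
+{
+	cmd->res_id = CPU_TO_LE16(ICE_AQC_RES_ID_NVM);
+	cmd->access_type = CPU_TO_LE16(ICE_AQC_RES_ACCESS_WRITE);
+	cmd->timeout = CPU_TO_LE32(ICE_AQ_RES_NVM_WRITE_DFLT_TIMEOUT_MS);
+}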
+
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct ice_aqc_list_caps {
+	u8 cmd_flags;
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Device/Function buffer entry, repeated per reported capability */
+struct ice_aqc_list_caps_elem {
+	__le16 cap;
+#define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_CAPS_VSI				0x0017
+#define ICE_AQC_CAPS_RSS				0x0040
+#define ICE_AQC_CAPS_RXQS				0x0041
+#define ICE_AQC_CAPS_TXQS				0x0042
+#define ICE_AQC_CAPS_MSIX				0x0043
+#define ICE_AQC_CAPS_MAX_MTU				0x0047
+
+	u8 major_ver;
+	u8 minor_ver;
+	/* Number of resources described by this capability */
+	__le32 number;
+	/* Only meaningful for some types of resources */
+	__le32 logical_id;
+	/* Only meaningful for some types of resources */
+	__le32 phys_id;
+	__le64 rsvd1;
+	__le64 rsvd2;
+};
+
+
+/* Manage MAC address, read command - indirect (0x0107)
+ * This struct is also used for the response
+ */
+struct ice_aqc_manage_mac_read {
+	__le16 flags; /* Zeroed by device driver */
+#define ICE_AQC_MAN_MAC_LAN_ADDR_VALID		BIT(4)
+#define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
+#define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
+#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_READ_S			4
+#define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 num_addr; /* Used in response */
+	u8 reserved[3];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response buffer format for manage MAC read command */
+struct ice_aqc_manage_mac_read_resp {
+	u8 lport_num;
+	u8 addr_type;
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_LAN		0
+#define ICE_AQC_MAN_MAC_ADDR_TYPE_WOL		1
+	u8 mac_addr[ETH_ALEN];
+};
+
+
+/* Manage MAC address, write command - direct (0x0108) */
+struct ice_aqc_manage_mac_write {
+	u8 port_num;
+	u8 flags;
+#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
+#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
+#define ICE_AQC_MAN_MAC_WR_S		6
+#define ICE_AQC_MAN_MAC_WR_M		(3 << ICE_AQC_MAN_MAC_WR_S)
+#define ICE_AQC_MAN_MAC_UPDATE_LAA	0
+#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL	(BIT(0) << ICE_AQC_MAN_MAC_WR_S)
+	/* High 16 bits of MAC address in big endian order */
+	__be16 sah;
+	/* Low 32 bits of MAC address in big endian order */
+	__be32 sal;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Clear PXE Command and response (direct 0x0110) */
+struct ice_aqc_clear_pxe {
+	u8 rx_cnt;
+#define ICE_AQC_CLEAR_PXE_RX_CNT		0x2
+	u8 reserved[15];
+};
+
+
+/* Get switch configuration (0x0200) */
+struct ice_aqc_get_sw_cfg {
+	/* Reserved for command and copy of request flags for response */
+	__le16 flags;
+	/* First desc in case of command and next_elem in case of response.
+	 * In a response, a non-zero value means not all of the configuration
+	 * was returned, and a new command shall be sent with this value in
+	 * the 'first desc' field (see the continuation sketch after this
+	 * struct).
+	 */
+	__le16 element;
+	/* Reserved for command, only used for response */
+	__le16 num_elems;
+	__le16 rsvd;
+	__le32 addr_high;
+	__le32 addr_low;
+};
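+
+/* Illustrative sketch (not part of this patch): the 'element' continuation
+ * protocol described in the struct above. Actually sending opcode 0x0200 is
+ * omitted; only the resend-until-zero loop shape is shown.
+ */
+static inline void
+ice_get_sw_cfg_loop_example(struct ice_aqc_get_sw_cfg *cmd)
+{
+	__le16 next = 0;
+
+	do {
+		cmd->element = next;
+		/* ...send the command via the admin queue (omitted)... */
+		next = cmd->element; /* FW writes next_elem into the response */
+	} while (next); /* zero means the full configuration was returned */
+}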
+
+
+/* Each entry in the response buffer is of the following type: */
+struct ice_aqc_get_sw_cfg_resp_elem {
+	/* VSI/Port Number */
+	__le16 vsi_port_num;
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M	\
+			(0x3FF << ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_S	14
+#define ICE_AQC_GET_SW_CONF_RESP_TYPE_M	(0x3 << ICE_AQC_GET_SW_CONF_RESP_TYPE_S)
+#define ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT	0
+#define ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT	1
+#define ICE_AQC_GET_SW_CONF_RESP_VSI		2
+
+	/* SWID VSI/Port belongs to */
+	__le16 swid;
+
+	/* Bit 14..0 : PF/VF number VSI belongs to
+	 * Bit 15 : VF indication bit
+	 */
+	__le16 pf_vf_num;
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S	0
+#define ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M	\
+				(0x7FFF << ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_S)
+#define ICE_AQC_GET_SW_CONF_RESP_IS_VF		BIT(15)
+};
+
+
+/* The response buffer is as follows. Note that the length of the
+ * elements array varies with the length of the command response.
+ */
+struct ice_aqc_get_sw_cfg_resp {
+	struct ice_aqc_get_sw_cfg_resp_elem elements[1];
+};
+
+
+
+/* These resource type defines are used for all switch resource
+ * commands where a resource type is required, such as:
+ * Get Resource Allocation command (indirect 0x0204)
+ * Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ * Get Allocated Resource Descriptors Command (indirect 0x020A)
+ */
+#define ICE_AQC_RES_TYPE_VSI_LIST_REP			0x03
+#define ICE_AQC_RES_TYPE_VSI_LIST_PRUNE			0x04
+
+#define ICE_AQC_RES_TYPE_FLAG_SHARED			BIT(7)
+#define ICE_AQC_RES_TYPE_FLAG_SCAN_BOTTOM		BIT(12)
+#define ICE_AQC_RES_TYPE_FLAG_IGNORE_INDEX		BIT(13)
+
+#define ICE_AQC_RES_TYPE_FLAG_DEDICATED			0x00
+
+
+
+/* Allocate Resources command (indirect 0x0208)
+ * Free Resources command (indirect 0x0209)
+ */
+struct ice_aqc_alloc_free_res_cmd {
+	__le16 num_entries; /* Number of Resource entries */
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Resource descriptor */
+struct ice_aqc_res_elem {
+	union {
+		__le16 sw_resp;
+		__le16 flu_resp;
+	} e;
+};
+
+
+/* Buffer for Allocate/Free Resources commands */
+struct ice_aqc_alloc_free_res_elem {
+	__le16 res_type; /* Types defined above cmd 0x0204 */
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S	8
+#define ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_M	\
+				(0xF << ICE_AQC_RES_TYPE_VSI_PRUNE_LIST_S)
+	__le16 num_elems;
+	struct ice_aqc_res_elem elem[1];
+};
+
+
+
+
+/* Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Get VSI (indirect 0x0212)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_get_update_free_vsi {
+	__le16 vsi_num;
+#define ICE_AQ_VSI_NUM_S	0
+#define ICE_AQ_VSI_NUM_M	(0x03FF << ICE_AQ_VSI_NUM_S)
+#define ICE_AQ_VSI_IS_VALID	BIT(15)
+	__le16 cmd_flags;
+#define ICE_AQ_VSI_KEEP_ALLOC	0x1
+	u8 vf_id;
+	u8 reserved;
+	__le16 vsi_flags;
+#define ICE_AQ_VSI_TYPE_S	0
+#define ICE_AQ_VSI_TYPE_M	(0x3 << ICE_AQ_VSI_TYPE_S)
+#define ICE_AQ_VSI_TYPE_VF	0x0
+#define ICE_AQ_VSI_TYPE_VMDQ2	0x1
+#define ICE_AQ_VSI_TYPE_PF	0x2
+#define ICE_AQ_VSI_TYPE_EMP_MNG	0x3
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Response descriptor for:
+ * Add VSI (indirect 0x0210)
+ * Update VSI (indirect 0x0211)
+ * Free VSI (indirect 0x0213)
+ */
+struct ice_aqc_add_update_free_vsi_resp {
+	__le16 vsi_num;
+	__le16 ext_status;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+
+struct ice_aqc_vsi_props {
+	__le16 valid_sections;
+#define ICE_AQ_VSI_PROP_SW_VALID		BIT(0)
+#define ICE_AQ_VSI_PROP_SECURITY_VALID		BIT(1)
+#define ICE_AQ_VSI_PROP_VLAN_VALID		BIT(2)
+#define ICE_AQ_VSI_PROP_OUTER_TAG_VALID		BIT(3)
+#define ICE_AQ_VSI_PROP_INGRESS_UP_VALID	BIT(4)
+#define ICE_AQ_VSI_PROP_EGRESS_UP_VALID		BIT(5)
+#define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
+#define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
+#define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
+#define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
+	/* switch section */
+	u8 sw_id;
+	u8 sw_flags;
+#define ICE_AQ_VSI_SW_FLAG_ALLOW_LB		BIT(5)
+#define ICE_AQ_VSI_SW_FLAG_LOCAL_LB		BIT(6)
+#define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE		BIT(7)
+	u8 sw_flags2;
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S	0
+#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M	\
+				(0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S)
+#define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA	BIT(0)
+#define ICE_AQ_VSI_SW_FLAG_LAN_ENA		BIT(4)
+	u8 veb_stat_id;
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_S		0
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_M	(0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S)
+#define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID		BIT(5)
+	/* security section */
+	u8 sec_flags;
+#define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	BIT(0)
+#define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF	BIT(2)
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S	4
+#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M	(0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)
+#define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA	BIT(0)
+	u8 sec_reserved;
+	/* VLAN section */
+	__le16 pvid; /* VLANS include priority bits */
+	u8 pvlan_reserved[2];
+	u8 vlan_flags;
+#define ICE_AQ_VSI_VLAN_MODE_S	0
+#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
+#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
+#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
+#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+#define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+#define ICE_AQ_VSI_VLAN_EMOD_S	3
+#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+	u8 pvlan_reserved2[3];
+	/* ingress egress up sections */
+	__le32 ingress_table; /* bitmap, 3 bits per up */
+#define ICE_AQ_VSI_UP_TABLE_UP0_S	0
+#define ICE_AQ_VSI_UP_TABLE_UP0_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S)
+#define ICE_AQ_VSI_UP_TABLE_UP1_S	3
+#define ICE_AQ_VSI_UP_TABLE_UP1_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S)
+#define ICE_AQ_VSI_UP_TABLE_UP2_S	6
+#define ICE_AQ_VSI_UP_TABLE_UP2_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S)
+#define ICE_AQ_VSI_UP_TABLE_UP3_S	9
+#define ICE_AQ_VSI_UP_TABLE_UP3_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S)
+#define ICE_AQ_VSI_UP_TABLE_UP4_S	12
+#define ICE_AQ_VSI_UP_TABLE_UP4_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S)
+#define ICE_AQ_VSI_UP_TABLE_UP5_S	15
+#define ICE_AQ_VSI_UP_TABLE_UP5_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S)
+#define ICE_AQ_VSI_UP_TABLE_UP6_S	18
+#define ICE_AQ_VSI_UP_TABLE_UP6_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S)
+#define ICE_AQ_VSI_UP_TABLE_UP7_S	21
+#define ICE_AQ_VSI_UP_TABLE_UP7_M	(0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S)
+	__le32 egress_table;   /* same defines as for ingress table */
+	/* outer tags section */
+	__le16 outer_tag;
+	u8 outer_tag_flags;
+#define ICE_AQ_VSI_OUTER_TAG_MODE_S	0
+#define ICE_AQ_VSI_OUTER_TAG_MODE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NOTHING	0x0
+#define ICE_AQ_VSI_OUTER_TAG_REMOVE	0x1
+#define ICE_AQ_VSI_OUTER_TAG_COPY	0x2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_S	2
+#define ICE_AQ_VSI_OUTER_TAG_TYPE_M	(0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S)
+#define ICE_AQ_VSI_OUTER_TAG_NONE	0x0
+#define ICE_AQ_VSI_OUTER_TAG_STAG	0x1
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100	0x2
+#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100	0x3
+#define ICE_AQ_VSI_OUTER_TAG_INSERT	BIT(4)
+#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6)
+	u8 outer_tag_reserved;
+	/* queue mapping section */
+	__le16 mapping_flags;
+#define ICE_AQ_VSI_Q_MAP_CONTIG	0x0
+#define ICE_AQ_VSI_Q_MAP_NONCONTIG	BIT(0)
+	__le16 q_mapping[16];
+#define ICE_AQ_VSI_Q_S		0
+#define ICE_AQ_VSI_Q_M		(0x7FF << ICE_AQ_VSI_Q_S)
+	__le16 tc_mapping[8];
+#define ICE_AQ_VSI_TC_Q_OFFSET_S	0
+#define ICE_AQ_VSI_TC_Q_OFFSET_M	(0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S)
+#define ICE_AQ_VSI_TC_Q_NUM_S		11
+#define ICE_AQ_VSI_TC_Q_NUM_M		(0xF << ICE_AQ_VSI_TC_Q_NUM_S)
+	/* queueing option section */
+	u8 q_opt_rss;
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S	0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI	0x0
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF	0x2
+#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL	0x3
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S	2
+#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M	(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S	6
+#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ	(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ	(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_XOR	(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+#define ICE_AQ_VSI_Q_OPT_RSS_JHASH	(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+	u8 q_opt_tc;
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_S	0
+#define ICE_AQ_VSI_Q_OPT_TC_OVR_M	(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
+#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR	BIT(7)
+	u8 q_opt_flags;
+#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN	BIT(0)
+	u8 q_opt_reserved[3];
+	/* outer up section */
+	__le32 outer_up_table; /* same structure and defines as ingress tbl */
+	/* section 10 */
+	__le16 sect_10_reserved;
+	/* flow director section */
+	__le16 fd_options;
+#define ICE_AQ_VSI_FD_ENABLE		BIT(0)
+#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE	BIT(1)
+#define ICE_AQ_VSI_FD_PROG_ENABLE	BIT(3)
+	__le16 max_fd_fltr_dedicated;
+	__le16 max_fd_fltr_shared;
+	__le16 fd_def_q;
+#define ICE_AQ_VSI_FD_DEF_Q_S		0
+#define ICE_AQ_VSI_FD_DEF_Q_M		(0x7FF << ICE_AQ_VSI_FD_DEF_Q_S)
+#define ICE_AQ_VSI_FD_DEF_GRP_S	12
+#define ICE_AQ_VSI_FD_DEF_GRP_M	(0x7 << ICE_AQ_VSI_FD_DEF_GRP_S)
+	__le16 fd_report_opt;
+#define ICE_AQ_VSI_FD_REPORT_Q_S	0
+#define ICE_AQ_VSI_FD_REPORT_Q_M	(0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S)
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_S	12
+#define ICE_AQ_VSI_FD_DEF_PRIORITY_M	(0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S)
+#define ICE_AQ_VSI_FD_DEF_DROP		BIT(15)
+	/* PASID section */
+	__le32 pasid_id;
+#define ICE_AQ_VSI_PASID_ID_S		0
+#define ICE_AQ_VSI_PASID_ID_M		(0xFFFFF << ICE_AQ_VSI_PASID_ID_S)
+#define ICE_AQ_VSI_PASID_ID_VALID	BIT(31)
+	u8 reserved[24];
+};
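+
+/* Illustrative sketch (not part of this patch): build the 3-bits-per-UP
+ * ingress translation table from an array of eight user-priority values,
+ * matching the UP0..UP7 shift/mask defines inside struct ice_aqc_vsi_props.
+ */
+static inline __le32
+ice_vsi_up_table_pack_example(const u8 up_map[8])
+{
+	u32 tbl = 0;
+	int i;
+
+	for (i = 0; i < 8; i++)
+		tbl |= (u32)(up_map[i] & 0x7) << (i * 3);
+	return CPU_TO_LE32(tbl);
+}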
+
+
+
+#define ICE_MAX_NUM_RECIPES 64
+
+
+/* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
+ */
+struct ice_aqc_sw_rules {
+	/* ops: add switch rules, refers to the number of rules.
+	 * ops: update switch rules, refers to the number of filters.
+	 * ops: remove switch rules, refers to the entry index.
+	 * ops: get switch rules, refers to the number of filters.
+	 */
+	__le16 num_rules_fltr_entry_index;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#pragma pack(1)
+/* Add/Update/Get/Remove lookup Rx/Tx command/response entry
+ * This structure describes the lookup rules and associated actions. "index"
+ * is returned as part of a response to a successful Add command, and can be
+ * used to identify the rule for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lkup_rx_tx {
+	__le16 recipe_id;
+#define ICE_SW_RECIPE_LOGICAL_PORT_FWD		10
+	/* Source port for LOOKUP_RX and source VSI in case of LOOKUP_TX */
+	__le16 src;
+	__le32 act;
+
+	/* Bit 0:1 - Action type */
+#define ICE_SINGLE_ACT_TYPE_S	0x00
+#define ICE_SINGLE_ACT_TYPE_M	(0x3 << ICE_SINGLE_ACT_TYPE_S)
+
+	/* Bit 2 - Loop back enable
+	 * Bit 3 - LAN enable
+	 */
+#define ICE_SINGLE_ACT_LB_ENABLE	BIT(2)
+#define ICE_SINGLE_ACT_LAN_ENABLE	BIT(3)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_SINGLE_ACT_VSI_FORWARDING	0x0
+
+#define ICE_SINGLE_ACT_VSI_ID_S		4
+#define ICE_SINGLE_ACT_VSI_ID_M		(0x3FF << ICE_SINGLE_ACT_VSI_ID_S)
+#define ICE_SINGLE_ACT_VSI_LIST_ID_S	4
+#define ICE_SINGLE_ACT_VSI_LIST_ID_M	(0x3FF << ICE_SINGLE_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_SINGLE_ACT_VSI_LIST		BIT(14)
+#define ICE_SINGLE_ACT_VALID_BIT	BIT(17)
+#define ICE_SINGLE_ACT_DROP		BIT(18)
+
+	/* Action type = 1 - Forward to Queue or Queue group */
+#define ICE_SINGLE_ACT_TO_Q		0x1
+#define ICE_SINGLE_ACT_Q_INDEX_S	4
+#define ICE_SINGLE_ACT_Q_INDEX_M	(0x7FF << ICE_SINGLE_ACT_Q_INDEX_S)
+#define ICE_SINGLE_ACT_Q_REGION_S	15
+#define ICE_SINGLE_ACT_Q_REGION_M	(0x7 << ICE_SINGLE_ACT_Q_REGION_S)
+#define ICE_SINGLE_ACT_Q_PRIORITY	BIT(18)
+
+	/* Action type = 2 - Prune */
+#define ICE_SINGLE_ACT_PRUNE		0x2
+#define ICE_SINGLE_ACT_EGRESS		BIT(15)
+#define ICE_SINGLE_ACT_INGRESS		BIT(16)
+#define ICE_SINGLE_ACT_PRUNET		BIT(17)
+	/* Bit 18 should be set to 0 for this action */
+
+	/* Action type = 2 - Pointer */
+#define ICE_SINGLE_ACT_PTR		0x2
+#define ICE_SINGLE_ACT_PTR_VAL_S	4
+#define ICE_SINGLE_ACT_PTR_VAL_M	(0x1FFF << ICE_SINGLE_ACT_PTR_VAL_S)
+	/* Bit 18 should be set to 1 */
+#define ICE_SINGLE_ACT_PTR_BIT		BIT(18)
+
+	/* Action type = 3 - Other actions. Last two bits
+	 * are other action identifier
+	 */
+#define ICE_SINGLE_ACT_OTHER_ACTS		0x3
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S	17
+#define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M	\
+				(0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S)
+
+	/* Bit 17:18 - Defines other actions */
+	/* Other action = 0 - Mirror VSI */
+#define ICE_SINGLE_OTHER_ACT_MIRROR		0
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_S	4
+#define ICE_SINGLE_ACT_MIRROR_VSI_ID_M	\
+				(0x3FF << ICE_SINGLE_ACT_MIRROR_VSI_ID_S)
+
+	/* Other action = 3 - Set Stat count */
+#define ICE_SINGLE_OTHER_ACT_STAT_COUNT		3
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_S	4
+#define ICE_SINGLE_ACT_STAT_COUNT_INDEX_M	\
+				(0x7F << ICE_SINGLE_ACT_STAT_COUNT_INDEX_S)
+
+	__le16 index; /* The index of the rule in the lookup table */
+	/* Length and values of the header to be matched per recipe or
+	 * lookup-type
+	 */
+	__le16 hdr_len;
+	u8 hdr[1];
+};
+#pragma pack()
+
+
+/* Add/Update/Remove large action command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the action for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_lg_act {
+	__le16 index; /* Index in large action table */
+	__le16 size;
+	__le32 act[1]; /* array of size for actions */
+	/* Max number of large actions */
+#define ICE_MAX_LG_ACT	4
+	/* Bit 0:2 - Action type */
+#define ICE_LG_ACT_TYPE_S	0
+#define ICE_LG_ACT_TYPE_M	(0x7 << ICE_LG_ACT_TYPE_S)
+
+	/* Action type = 0 - Forward to VSI or VSI list */
+#define ICE_LG_ACT_VSI_FORWARDING	0
+#define ICE_LG_ACT_VSI_ID_S		3
+#define ICE_LG_ACT_VSI_ID_M		(0x3FF << ICE_LG_ACT_VSI_ID_S)
+#define ICE_LG_ACT_VSI_LIST_ID_S	3
+#define ICE_LG_ACT_VSI_LIST_ID_M	(0x3FF << ICE_LG_ACT_VSI_LIST_ID_S)
+	/* This bit needs to be set if action is forward to VSI list */
+#define ICE_LG_ACT_VSI_LIST		BIT(13)
+
+#define ICE_LG_ACT_VALID_BIT		BIT(16)
+
+	/* Action type = 1 - Forward to Queue or Queue group */
+#define ICE_LG_ACT_TO_Q			0x1
+#define ICE_LG_ACT_Q_INDEX_S		3
+#define ICE_LG_ACT_Q_INDEX_M		(0x7FF << ICE_LG_ACT_Q_INDEX_S)
+#define ICE_LG_ACT_Q_REGION_S		14
+#define ICE_LG_ACT_Q_REGION_M		(0x7 << ICE_LG_ACT_Q_REGION_S)
+#define ICE_LG_ACT_Q_PRIORITY_SET	BIT(17)
+
+	/* Action type = 2 - Prune */
+#define ICE_LG_ACT_PRUNE		0x2
+#define ICE_LG_ACT_EGRESS		BIT(14)
+#define ICE_LG_ACT_INGRESS		BIT(15)
+#define ICE_LG_ACT_PRUNET		BIT(16)
+
+	/* Action type = 3 - Mirror VSI */
+#define ICE_LG_OTHER_ACT_MIRROR		0x3
+#define ICE_LG_ACT_MIRROR_VSI_ID_S	3
+#define ICE_LG_ACT_MIRROR_VSI_ID_M	(0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S)
+
+	/* Action type = 5 - Generic Value */
+#define ICE_LG_ACT_GENERIC		0x5
+#define ICE_LG_ACT_GENERIC_VALUE_S	3
+#define ICE_LG_ACT_GENERIC_VALUE_M	(0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+#define ICE_LG_ACT_GENERIC_OFFSET_S	19
+#define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+#define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+#define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
+#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+
+	/* Action = 7 - Set Stat count */
+#define ICE_LG_ACT_STAT_COUNT		0x7
+#define ICE_LG_ACT_STAT_COUNT_S		3
+#define ICE_LG_ACT_STAT_COUNT_M		(0x7F << ICE_LG_ACT_STAT_COUNT_S)
+};
+
+
+/* Add/Update/Remove VSI list command/response entry
+ * "index" is returned as part of a response to a successful Add command, and
+ * can be used to identify the VSI list for Update/Get/Remove commands.
+ */
+struct ice_sw_rule_vsi_list {
+	__le16 index; /* Index of VSI/Prune list */
+	__le16 number_vsi;
+	__le16 vsi[1]; /* Array of number_vsi VSI numbers */
+};
+
+
+#pragma pack(1)
+/* Query VSI list command/response entry */
+struct ice_sw_rule_vsi_list_query {
+	__le16 index;
+	ice_declare_bitmap(vsi_list, ICE_MAX_VSI);
+};
+#pragma pack()
+
+
+#pragma pack(1)
+/* Add switch rule response:
+ * Content of return buffer is same as the input buffer. The status field and
+ * LUT index are updated as part of the response
+ */
+struct ice_aqc_sw_rules_elem {
+	__le16 type; /* Switch rule type, one of T_... */
+#define ICE_AQC_SW_RULES_T_LKUP_RX		0x0
+#define ICE_AQC_SW_RULES_T_LKUP_TX		0x1
+#define ICE_AQC_SW_RULES_T_LG_ACT		0x2
+#define ICE_AQC_SW_RULES_T_VSI_LIST_SET		0x3
+#define ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR	0x4
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_SET	0x5
+#define ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR	0x6
+	__le16 status;
+	union {
+		struct ice_sw_rule_lkup_rx_tx lkup_tx_rx;
+		struct ice_sw_rule_lg_act lg_act;
+		struct ice_sw_rule_vsi_list vsi_list;
+		struct ice_sw_rule_vsi_list_query vsi_list_query;
+	} pdata;
+};
+
+#pragma pack()
+
+
+
+/* Get Default Topology (indirect 0x0400) */
+struct ice_aqc_get_topo {
+	u8 port_num;
+	u8 num_branches;
+	__le16 reserved1;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Update TSE (indirect 0x0403)
+ * Get TSE (indirect 0x0404)
+ * Add TSE (indirect 0x0401)
+ * Delete TSE (indirect 0x040F)
+ * Move TSE (indirect 0x0408)
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_sched_elem_cmd {
+	__le16 num_elem_req;	/* Used by commands */
+	__le16 num_elem_resp;	/* Used by responses */
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the buffer for:
+ * Suspend Nodes (indirect 0x0409)
+ * Resume Nodes (indirect 0x040A)
+ */
+struct ice_aqc_suspend_resume_elem {
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_txsched_move_grp_info_hdr {
+	__le32 src_parent_teid;
+	__le32 dest_parent_teid;
+	__le16 num_elems;
+	__le16 reserved;
+};
+
+
+struct ice_aqc_move_elem {
+	struct ice_aqc_txsched_move_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+struct ice_aqc_elem_info_bw {
+	__le16 bw_profile_idx;
+	__le16 bw_alloc;
+};
+
+
+struct ice_aqc_txsched_elem {
+	u8 elem_type; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_TYPE_UNDEFINED		0x0
+#define ICE_AQC_ELEM_TYPE_ROOT_PORT		0x1
+#define ICE_AQC_ELEM_TYPE_TC			0x2
+#define ICE_AQC_ELEM_TYPE_SE_GENERIC		0x3
+#define ICE_AQC_ELEM_TYPE_ENTRY_POINT		0x4
+#define ICE_AQC_ELEM_TYPE_LEAF			0x5
+#define ICE_AQC_ELEM_TYPE_SE_PADDED		0x6
+	u8 valid_sections;
+#define ICE_AQC_ELEM_VALID_GENERIC		BIT(0)
+#define ICE_AQC_ELEM_VALID_CIR			BIT(1)
+#define ICE_AQC_ELEM_VALID_EIR			BIT(2)
+#define ICE_AQC_ELEM_VALID_SHARED		BIT(3)
+	u8 generic;
+#define ICE_AQC_ELEM_GENERIC_MODE_M		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_S		0x1
+#define ICE_AQC_ELEM_GENERIC_PRIO_M	(0x7 << ICE_AQC_ELEM_GENERIC_PRIO_S)
+#define ICE_AQC_ELEM_GENERIC_SP_S		0x4
+#define ICE_AQC_ELEM_GENERIC_SP_M	(0x1 << ICE_AQC_ELEM_GENERIC_SP_S)
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S	0x5
+#define ICE_AQC_ELEM_GENERIC_ADJUST_VAL_M	\
+	(0x3 << ICE_AQC_ELEM_GENERIC_ADJUST_VAL_S)
+	u8 flags; /* Special field, reserved for some aq calls */
+#define ICE_AQC_ELEM_FLAG_SUSPEND_M		0x1
+	struct ice_aqc_elem_info_bw cir_bw;
+	struct ice_aqc_elem_info_bw eir_bw;
+	__le16 srl_id;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_txsched_elem_data {
+	__le32 parent_teid;
+	__le32 node_teid;
+	struct ice_aqc_txsched_elem data;
+};
+
+
+struct ice_aqc_txsched_topo_grp_info_hdr {
+	__le32 parent_teid;
+	__le16 num_elems;
+	__le16 reserved2;
+};
+
+
+struct ice_aqc_add_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_conf_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_elem {
+	struct ice_aqc_txsched_elem_data generic[1];
+};
+
+
+struct ice_aqc_get_topo_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	struct ice_aqc_txsched_elem_data
+		generic[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+struct ice_aqc_delete_elem {
+	struct ice_aqc_txsched_topo_grp_info_hdr hdr;
+	__le32 teid[1];
+};
+
+
+
+
+/* Rate limiting profile for
+ * Add RL profile (indirect 0x0410)
+ * Query RL profile (indirect 0x0411)
+ * Remove RL profile (indirect 0x0415)
+ * These indirect commands act on single or multiple
+ * RL profiles with the specified data.
+ */
+struct ice_aqc_rl_profile {
+	__le16 num_profiles;
+	__le16 num_processed; /* Only for response. Reserved in Command. */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_rl_profile_elem {
+	u8 level;
+	u8 flags;
+#define ICE_AQC_RL_PROFILE_TYPE_S	0x0
+#define ICE_AQC_RL_PROFILE_TYPE_M	(0x3 << ICE_AQC_RL_PROFILE_TYPE_S)
+#define ICE_AQC_RL_PROFILE_TYPE_CIR	0
+#define ICE_AQC_RL_PROFILE_TYPE_EIR	1
+#define ICE_AQC_RL_PROFILE_TYPE_SRL	2
+/* The following flag is used for Query RL Profile Data */
+#define ICE_AQC_RL_PROFILE_INVAL_S	0x7
+#define ICE_AQC_RL_PROFILE_INVAL_M	(0x1 << ICE_AQC_RL_PROFILE_INVAL_S)
+
+	__le16 profile_id;
+	__le16 max_burst_size;
+	__le16 rl_multiply;
+	__le16 wake_up_calc;
+	__le16 rl_encode;
+};
+
+
+struct ice_aqc_rl_profile_generic_elem {
+	struct ice_aqc_rl_profile_elem generic[1];
+};
+
+
+
+/* Configure L2 Node CGD (indirect 0x0414)
+ * This indirect command allows configuring a congestion domain for given L2
+ * node TEIDs in the scheduler topology.
+ */
+struct ice_aqc_cfg_l2_node_cgd {
+	__le16 num_l2_nodes;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_elem {
+	__le32 node_teid;
+	u8 cgd;
+	u8 reserved[3];
+};
+
+
+struct ice_aqc_cfg_l2_node_cgd_data {
+	struct ice_aqc_cfg_l2_node_cgd_elem elem[1];
+};
+
+
+/* Query Scheduler Resource Allocation (indirect 0x0412)
+ * This indirect command retrieves the scheduler resources allocated by
+ * EMP Firmware to the given PF.
+ */
+struct ice_aqc_query_txsched_res {
+	u8 reserved[8];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_generic_sched_props {
+	__le16 phys_levels;
+	__le16 logical_levels;
+	u8 flattening_bitmap;
+	u8 max_device_cgds;
+	u8 max_pf_cgds;
+	u8 rsvd0;
+	__le16 rdma_qsets;
+	u8 rsvd1[22];
+};
+
+
+struct ice_aqc_layer_props {
+	u8 logical_layer;
+	u8 chunk_size;
+	__le16 max_device_nodes;
+	__le16 max_pf_nodes;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
+	__le16 max_cir_rl_profiles;
+	__le16 max_eir_rl_profiles;
+	__le16 max_srl_profiles;
+	u8 rsvd1[14];
+};
+
+
+struct ice_aqc_query_txsched_res_resp {
+	struct ice_aqc_generic_sched_props sched_props;
+	struct ice_aqc_layer_props layer_props[ICE_AQC_TOPO_MAX_LEVEL_NUM];
+};
+
+
+/* Query Node to Root Topology (indirect 0x0413)
+ * This command uses ice_aqc_get_elem as its data buffer.
+ */
+struct ice_aqc_query_node_to_root {
+	__le32 teid;
+	__le32 num_nodes; /* Response only */
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get PHY capabilities (indirect 0x0600) */
+struct ice_aqc_get_phy_caps {
+	u8 lport_num;
+	u8 reserved;
+	__le16 param0;
+	/* 18.0 - Report qualified modules */
+#define ICE_AQC_GET_PHY_RQM		BIT(0)
+	/* 18.1 - 18.2 : Report mode
+	 * 00b - Report NVM capabilities
+	 * 01b - Report topology capabilities
+	 * 10b - Report SW configured
+	 */
+#define ICE_AQC_REPORT_MODE_S		1
+#define ICE_AQC_REPORT_MODE_M		(3 << ICE_AQC_REPORT_MODE_S)
+#define ICE_AQC_REPORT_NVM_CAP		0
+#define ICE_AQC_REPORT_TOPO_CAP		BIT(1)
+#define ICE_AQC_REPORT_SW_CFG		BIT(2)
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is #define of PHY type (Extended):
+ * The first set of defines is for phy_type_low.
+ */
+#define ICE_PHY_TYPE_LOW_100BASE_TX		BIT_ULL(0)
+#define ICE_PHY_TYPE_LOW_100M_SGMII		BIT_ULL(1)
+#define ICE_PHY_TYPE_LOW_1000BASE_T		BIT_ULL(2)
+#define ICE_PHY_TYPE_LOW_1000BASE_SX		BIT_ULL(3)
+#define ICE_PHY_TYPE_LOW_1000BASE_LX		BIT_ULL(4)
+#define ICE_PHY_TYPE_LOW_1000BASE_KX		BIT_ULL(5)
+#define ICE_PHY_TYPE_LOW_1G_SGMII		BIT_ULL(6)
+#define ICE_PHY_TYPE_LOW_2500BASE_T		BIT_ULL(7)
+#define ICE_PHY_TYPE_LOW_2500BASE_X		BIT_ULL(8)
+#define ICE_PHY_TYPE_LOW_2500BASE_KX		BIT_ULL(9)
+#define ICE_PHY_TYPE_LOW_5GBASE_T		BIT_ULL(10)
+#define ICE_PHY_TYPE_LOW_5GBASE_KR		BIT_ULL(11)
+#define ICE_PHY_TYPE_LOW_10GBASE_T		BIT_ULL(12)
+#define ICE_PHY_TYPE_LOW_10G_SFI_DA		BIT_ULL(13)
+#define ICE_PHY_TYPE_LOW_10GBASE_SR		BIT_ULL(14)
+#define ICE_PHY_TYPE_LOW_10GBASE_LR		BIT_ULL(15)
+#define ICE_PHY_TYPE_LOW_10GBASE_KR_CR1		BIT_ULL(16)
+#define ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC	BIT_ULL(17)
+#define ICE_PHY_TYPE_LOW_10G_SFI_C2C		BIT_ULL(18)
+#define ICE_PHY_TYPE_LOW_25GBASE_T		BIT_ULL(19)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR		BIT_ULL(20)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR_S		BIT_ULL(21)
+#define ICE_PHY_TYPE_LOW_25GBASE_CR1		BIT_ULL(22)
+#define ICE_PHY_TYPE_LOW_25GBASE_SR		BIT_ULL(23)
+#define ICE_PHY_TYPE_LOW_25GBASE_LR		BIT_ULL(24)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR		BIT_ULL(25)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR_S		BIT_ULL(26)
+#define ICE_PHY_TYPE_LOW_25GBASE_KR1		BIT_ULL(27)
+#define ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC	BIT_ULL(28)
+#define ICE_PHY_TYPE_LOW_25G_AUI_C2C		BIT_ULL(29)
+#define ICE_PHY_TYPE_LOW_40GBASE_CR4		BIT_ULL(30)
+#define ICE_PHY_TYPE_LOW_40GBASE_SR4		BIT_ULL(31)
+#define ICE_PHY_TYPE_LOW_40GBASE_LR4		BIT_ULL(32)
+#define ICE_PHY_TYPE_LOW_40GBASE_KR4		BIT_ULL(33)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC	BIT_ULL(34)
+#define ICE_PHY_TYPE_LOW_40G_XLAUI		BIT_ULL(35)
+#define ICE_PHY_TYPE_LOW_50GBASE_CR2		BIT_ULL(36)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR2		BIT_ULL(37)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR2		BIT_ULL(38)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR2		BIT_ULL(39)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC	BIT_ULL(40)
+#define ICE_PHY_TYPE_LOW_50G_LAUI2		BIT_ULL(41)
+#define ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC	BIT_ULL(42)
+#define ICE_PHY_TYPE_LOW_50G_AUI2		BIT_ULL(43)
+#define ICE_PHY_TYPE_LOW_50GBASE_CP		BIT_ULL(44)
+#define ICE_PHY_TYPE_LOW_50GBASE_SR		BIT_ULL(45)
+#define ICE_PHY_TYPE_LOW_50GBASE_FR		BIT_ULL(46)
+#define ICE_PHY_TYPE_LOW_50GBASE_LR		BIT_ULL(47)
+#define ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4	BIT_ULL(48)
+#define ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC	BIT_ULL(49)
+#define ICE_PHY_TYPE_LOW_50G_AUI1		BIT_ULL(50)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR4		BIT_ULL(51)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR4		BIT_ULL(52)
+#define ICE_PHY_TYPE_LOW_100GBASE_LR4		BIT_ULL(53)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR4		BIT_ULL(54)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC	BIT_ULL(55)
+#define ICE_PHY_TYPE_LOW_100G_CAUI4		BIT_ULL(56)
+#define ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC	BIT_ULL(57)
+#define ICE_PHY_TYPE_LOW_100G_AUI4		BIT_ULL(58)
+#define ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4	BIT_ULL(59)
+#define ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4	BIT_ULL(60)
+#define ICE_PHY_TYPE_LOW_100GBASE_CP2		BIT_ULL(61)
+#define ICE_PHY_TYPE_LOW_100GBASE_SR2		BIT_ULL(62)
+#define ICE_PHY_TYPE_LOW_100GBASE_DR		BIT_ULL(63)
+#define ICE_PHY_TYPE_LOW_MAX_INDEX		63
+/* The second set of defines is for phy_type_high. */
+#define ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4	BIT_ULL(0)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC	BIT_ULL(1)
+#define ICE_PHY_TYPE_HIGH_100G_CAUI2		BIT_ULL(2)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC	BIT_ULL(3)
+#define ICE_PHY_TYPE_HIGH_100G_AUI2		BIT_ULL(4)
+#define ICE_PHY_TYPE_HIGH_MAX_INDEX		19
+
+struct ice_aqc_get_phy_caps_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQC_PHY_EN_TX_LINK_PAUSE			BIT(0)
+#define ICE_AQC_PHY_EN_RX_LINK_PAUSE			BIT(1)
+#define ICE_AQC_PHY_LOW_POWER_MODE			BIT(2)
+#define ICE_AQC_PHY_EN_LINK				BIT(3)
+#define ICE_AQC_PHY_AN_MODE				BIT(4)
+#define ICE_AQC_PHY_EN_MOD_QUAL				BIT(5)
+#define ICE_AQC_PHY_EN_LESM				BIT(6)
+#define ICE_AQC_PHY_EN_AUTO_FEC				BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK				MAKEMASK(0xff, 0)
+	u8 low_power_ctrl;
+#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG		BIT(0)
+	__le16 eee_cap;
+#define ICE_AQC_PHY_EEE_EN_100BASE_TX			BIT(0)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_T			BIT(1)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_T			BIT(2)
+#define ICE_AQC_PHY_EEE_EN_1000BASE_KX			BIT(3)
+#define ICE_AQC_PHY_EEE_EN_10GBASE_KR			BIT(4)
+#define ICE_AQC_PHY_EEE_EN_25GBASE_KR			BIT(5)
+#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4			BIT(6)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR2			BIT(7)
+#define ICE_AQC_PHY_EEE_EN_50GBASE_KR_PAM4		BIT(8)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR4			BIT(9)
+#define ICE_AQC_PHY_EEE_EN_100GBASE_KR2_PAM4		BIT(10)
+	__le16 eeer_value;
+	u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+	u8 phy_fw_ver[8];
+	u8 link_fec_options;
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN		BIT(0)
+#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ		BIT(1)
+#define ICE_AQC_PHY_FEC_25G_RS_528_REQ			BIT(2)
+#define ICE_AQC_PHY_FEC_25G_KR_REQ			BIT(3)
+#define ICE_AQC_PHY_FEC_25G_RS_544_REQ			BIT(4)
+#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
+#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
+#define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 extended_compliance_code;
+#define ICE_MODULE_TYPE_TOTAL_BYTE			3
+	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
+#define ICE_AQC_MOD_TYPE_BYTE0_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE0_QSFP_PLUS		0x80
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_PASSIVE	BIT(0)
+#define ICE_AQC_MOD_TYPE_BYTE1_SFP_PLUS_CU_ACTIVE	BIT(1)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_SR		BIT(4)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LR		BIT(5)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_LRM		BIT(6)
+#define ICE_AQC_MOD_TYPE_BYTE1_10G_BASE_ER		BIT(7)
+#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
+#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
+	u8 qualified_module_count;
+#define ICE_AQC_QUAL_MOD_COUNT_MAX			16
+	struct {
+		u8 v_oui[3];
+		u8 rsvd3;
+		u8 v_part[16];
+		__le32 v_rev;
+		__le64 rsvd8;
+	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
+};
+
+
+/* Set PHY capabilities (direct 0x0601)
+ * NOTE: This command must be followed by setup link and restart auto-neg
+ */
+struct ice_aqc_set_phy_cfg {
+	u8 lport_num;
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Set PHY config command data structure */
+struct ice_aqc_set_phy_cfg_data {
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+	u8 caps;
+#define ICE_AQ_PHY_ENA_TX_PAUSE_ABILITY		BIT(0)
+#define ICE_AQ_PHY_ENA_RX_PAUSE_ABILITY		BIT(1)
+#define ICE_AQ_PHY_ENA_LOW_POWER	BIT(2)
+#define ICE_AQ_PHY_ENA_LINK		BIT(3)
+#define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT	BIT(5)
+#define ICE_AQ_PHY_ENA_LESM		BIT(6)
+#define ICE_AQ_PHY_ENA_AUTO_FEC		BIT(7)
+	u8 low_power_ctrl;
+	__le16 eee_cap; /* Value from ice_aqc_get_phy_caps */
+	__le16 eeer_value;
+	u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */
+	u8 rsvd1;
+};
+
+
+
+/* Restart AN command data structure (direct 0x0605)
+ * Also used for response, with only the lport_num field present.
+ */
+struct ice_aqc_restart_an {
+	u8 lport_num;
+	u8 reserved;
+	u8 cmd_flags;
+#define ICE_AQC_RESTART_AN_LINK_RESTART	BIT(1)
+#define ICE_AQC_RESTART_AN_LINK_ENABLE	BIT(2)
+	u8 reserved2[13];
+};
+
+
+/* Get link status (indirect 0x0607), also used for Link Status Event */
+struct ice_aqc_get_link_status {
+	u8 lport_num;
+	u8 reserved;
+	__le16 cmd_flags;
+#define ICE_AQ_LSE_M			0x3
+#define ICE_AQ_LSE_NOP			0x0
+#define ICE_AQ_LSE_DIS			0x2
+#define ICE_AQ_LSE_ENA			0x3
+	/* only response uses this flag */
+#define ICE_AQ_LSE_IS_ENABLED		0x1
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* Get link status response data structure, also used for Link Status Event */
+struct ice_aqc_get_link_status_data {
+	u8 topo_media_conflict;
+#define ICE_AQ_LINK_TOPO_CONFLICT	BIT(0)
+#define ICE_AQ_LINK_MEDIA_CONFLICT	BIT(1)
+#define ICE_AQ_LINK_TOPO_CORRUPT	BIT(2)
+	u8 reserved1;
+	u8 link_info;
+#define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
+#define ICE_AQ_LINK_FAULT		BIT(1)
+#define ICE_AQ_LINK_FAULT_TX		BIT(2)
+#define ICE_AQ_LINK_FAULT_RX		BIT(3)
+#define ICE_AQ_LINK_FAULT_REMOTE	BIT(4)
+#define ICE_AQ_LINK_UP_PORT		BIT(5)	/* External Port Link Status */
+#define ICE_AQ_MEDIA_AVAILABLE		BIT(6)
+#define ICE_AQ_SIGNAL_DETECT		BIT(7)
+	u8 an_info;
+#define ICE_AQ_AN_COMPLETED		BIT(0)
+#define ICE_AQ_LP_AN_ABILITY		BIT(1)
+#define ICE_AQ_PD_FAULT			BIT(2)	/* Parallel Detection Fault */
+#define ICE_AQ_FEC_EN			BIT(3)
+#define ICE_AQ_PHY_LOW_POWER		BIT(4)	/* Low Power State */
+#define ICE_AQ_LINK_PAUSE_TX		BIT(5)
+#define ICE_AQ_LINK_PAUSE_RX		BIT(6)
+#define ICE_AQ_QUALIFIED_MODULE		BIT(7)
+	u8 ext_info;
+#define ICE_AQ_LINK_PHY_TEMP_ALARM	BIT(0)
+#define ICE_AQ_LINK_EXCESSIVE_ERRORS	BIT(1)	/* Excessive Link Errors */
+	/* Port TX Suspended */
+#define ICE_AQ_LINK_TX_S		2
+#define ICE_AQ_LINK_TX_M		(0x03 << ICE_AQ_LINK_TX_S)
+#define ICE_AQ_LINK_TX_ACTIVE		0
+#define ICE_AQ_LINK_TX_DRAINED		1
+#define ICE_AQ_LINK_TX_FLUSHED		3
+	u8 reserved2;
+	__le16 max_frame_size;
+	u8 cfg;
+#define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
+#define ICE_AQ_LINK_25G_RS_528_FEC_EN	BIT(1)
+#define ICE_AQ_LINK_25G_RS_544_FEC_EN	BIT(2)
+#define ICE_AQ_FEC_MASK			MAKEMASK(0x7, 0)
+	/* Pacing Config */
+#define ICE_AQ_CFG_PACING_S		3
+#define ICE_AQ_CFG_PACING_M		(0xF << ICE_AQ_CFG_PACING_S)
+#define ICE_AQ_CFG_PACING_TYPE_M	BIT(7)
+#define ICE_AQ_CFG_PACING_TYPE_AVG	0
+#define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
+	/* External Device Power Ability */
+	u8 power_desc;
+#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
+#define ICE_AQ_LINK_PWR_BASET_HIGH	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_2	1
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
+#define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
+	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_10MB		BIT(0)
+#define ICE_AQ_LINK_SPEED_100MB		BIT(1)
+#define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
+#define ICE_AQ_LINK_SPEED_2500MB	BIT(3)
+#define ICE_AQ_LINK_SPEED_5GB		BIT(4)
+#define ICE_AQ_LINK_SPEED_10GB		BIT(5)
+#define ICE_AQ_LINK_SPEED_20GB		BIT(6)
+#define ICE_AQ_LINK_SPEED_25GB		BIT(7)
+#define ICE_AQ_LINK_SPEED_40GB		BIT(8)
+#define ICE_AQ_LINK_SPEED_50GB		BIT(9)
+#define ICE_AQ_LINK_SPEED_100GB		BIT(10)
+#define ICE_AQ_LINK_SPEED_UNKNOWN	BIT(15)
+	__le32 reserved3; /* Aligns next field to 8-byte boundary */
+	__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
+	__le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
+};
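+
+/* Illustrative sketch (not part of this patch): map the one-hot link_speed
+ * field above (in host order) to a speed in Mb/s; unknown values fall
+ * through to zero.
+ */
+static inline u32 ice_link_speed_mbps_example(u16 speed)
+{
+	switch (speed) {
+	case ICE_AQ_LINK_SPEED_10MB: return 10;
+	case ICE_AQ_LINK_SPEED_100MB: return 100;
+	case ICE_AQ_LINK_SPEED_1000MB: return 1000;
+	case ICE_AQ_LINK_SPEED_2500MB: return 2500;
+	case ICE_AQ_LINK_SPEED_5GB: return 5000;
+	case ICE_AQ_LINK_SPEED_10GB: return 10000;
+	case ICE_AQ_LINK_SPEED_20GB: return 20000;
+	case ICE_AQ_LINK_SPEED_25GB: return 25000;
+	case ICE_AQ_LINK_SPEED_40GB: return 40000;
+	case ICE_AQ_LINK_SPEED_50GB: return 50000;
+	case ICE_AQ_LINK_SPEED_100GB: return 100000;
+	default: return 0;
+	}
+}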
+
+
+/* Set event mask command (direct 0x0613) */
+struct ice_aqc_set_event_mask {
+	u8	lport_num;
+	u8	reserved[7];
+	__le16	event_mask;
+#define ICE_AQ_LINK_EVENT_UPDOWN		BIT(1)
+#define ICE_AQ_LINK_EVENT_MEDIA_NA		BIT(2)
+#define ICE_AQ_LINK_EVENT_LINK_FAULT		BIT(3)
+#define ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM	BIT(4)
+#define ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS	BIT(5)
+#define ICE_AQ_LINK_EVENT_SIGNAL_DETECT		BIT(6)
+#define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
+#define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
+#define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+	u8	reserved1[6];
+};
+
+
+
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+	u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN		BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK		BIT(1)
+	u8 reserved[15];
+};
+
+
+
+
+
+/* Set Port Identification LED (direct, 0x06E9) */
+struct ice_aqc_set_port_id_led {
+	u8 lport_num;
+	u8 lport_num_valid;
+#define ICE_AQC_PORT_ID_PORT_NUM_VALID	BIT(0)
+	u8 ident_mode;
+#define ICE_AQC_PORT_IDENT_LED_BLINK	BIT(0)
+#define ICE_AQC_PORT_IDENT_LED_ORIG	0
+	u8 rsvd[13];
+};
+
+
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct ice_aqc_nvm {
+	__le16 offset_low;
+	u8 offset_high;
+	u8 cmd_flags;
+#define ICE_AQC_NVM_LAST_CMD		BIT(0)
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
+#define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+	__le16 module_typeid;
+	__le16 length;
+#define ICE_AQC_NVM_ERASE_LEN	0xFFFF
+	__le32 addr_high;
+	__le32 addr_low;
+};
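+
+/* Illustrative sketch (not part of this patch): split a flat 24-bit NVM
+ * offset across the offset_low/offset_high fields of the command above.
+ * CPU_TO_LE16 is assumed from the osdep layer.
+ */
+static inline void
+ice_nvm_set_offset_example(struct ice_aqc_nvm *cmd, u32 offset)
+{
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+}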
+
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+struct ice_aqc_nvm_cfg {
+	u8	cmd_flags;
+#define ICE_AQC_ANVM_MULTIPLE_ELEMS	BIT(0)
+#define ICE_AQC_ANVM_IMMEDIATE_FIELD	BIT(1)
+#define ICE_AQC_ANVM_NEW_CFG		BIT(2)
+	u8	reserved;
+	__le16 count;
+	__le16 id;
+	u8 reserved1[2];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+struct ice_aqc_nvm_cfg_data {
+	__le16 field_id;
+	__le16 field_options;
+	__le16 field_value;
+};
+
+
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+	u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY	BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC	BIT(1)
+	u8 rsvd;
+	__le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT	0xBABA
+	u8 rsvd2[12];
+};
+
+
+
+
+
+/* Get/Set RSS key (indirect 0x0B04/0x0B02) */
+struct ice_aqc_get_set_rss_key {
+#define ICE_AQC_GSET_RSS_KEY_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_KEY_VSI_ID_M	(0x3FF << ICE_AQC_GSET_RSS_KEY_VSI_ID_S)
+	__le16 vsi_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
+#define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+
+struct ice_aqc_get_set_rss_keys {
+	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
+	u8 extended_hash_key[ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE];
+};
+
+
+/* Get/Set RSS LUT (indirect 0x0B05/0x0B03) */
+struct ice_aqc_get_set_rss_lut {
+#define ICE_AQC_GSET_RSS_LUT_VSI_VALID	BIT(15)
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_S	0
+#define ICE_AQC_GSET_RSS_LUT_VSI_ID_M	(0x1FF << ICE_AQC_GSET_RSS_LUT_VSI_ID_S)
+	__le16 vsi_id;
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S	0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M	\
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI	 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF	 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL	 2
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S	 2
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M	 \
+				(0x3 << ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S)
+
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128	 128
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG 0
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512	 512
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG 1
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K	 2048
+#define ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG	 2
+
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S	 4
+#define ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M	 \
+				(0xF << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S)
+
+	__le16 flags;
+	__le32 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
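+
+/* Illustration (hypothetical values): selecting the 512-entry PF table
+ * would compose the flags field roughly as
+ *
+ *	flags = (ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF <<
+ *		 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) |
+ *		(ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ *		 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S);
+ */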
+
+
+
+
+
+/* Add TX LAN Queues (indirect 0x0C30) */
+struct ice_aqc_add_txqs {
+	u8 num_qgrps;
+	u8 reserved[3];
+	__le32 reserved1;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the Add TX LAN Queues
+ * command (0x0C30). Only used within struct ice_aqc_add_tx_qgrp.
+ */
+struct ice_aqc_add_txqs_perq {
+	__le16 txq_id;
+	u8 rsvd[2];
+	__le32 q_teid;
+	u8 txq_ctx[22];
+	u8 rsvd2[2];
+	struct ice_aqc_txsched_elem info;
+};
+
+
+/* The format of the command buffer for Add TX LAN Queues (0x0C30)
+ * is an array of the following structs. Please note that the length of
+ * each struct ice_aqc_add_tx_qgrp is variable due
+ * to the variable number of queues in each group!
+ */
+struct ice_aqc_add_tx_qgrp {
+	__le32 parent_teid;
+	u8 num_txqs;
+	u8 rsvd[3];
+	struct ice_aqc_add_txqs_perq txqs[1];
+};
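+
+/* For illustration: since txqs[] is declared with one element, a group
+ * carrying num_txqs queues occupies
+ *
+ *	sizeof(struct ice_aqc_add_tx_qgrp) +
+ *		(num_txqs - 1) * sizeof(struct ice_aqc_add_txqs_perq)
+ *
+ * bytes of the command buffer.
+ */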
+
+
+/* Disable TX LAN Queues (indirect 0x0C31) */
+struct ice_aqc_dis_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_DIS_CMD_S		0
+#define ICE_AQC_Q_DIS_CMD_M		(0x3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_NO_FUNC_RESET	(0 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VM_RESET	BIT(ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_VF_RESET	(2 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_PF_RESET	(3 << ICE_AQC_Q_DIS_CMD_S)
+#define ICE_AQC_Q_DIS_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_DIS_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_entries;
+	__le16 vmvf_and_timeout;
+#define ICE_AQC_Q_DIS_VMVF_NUM_S	0
+#define ICE_AQC_Q_DIS_VMVF_NUM_M	(0x3FF << ICE_AQC_Q_DIS_VMVF_NUM_S)
+#define ICE_AQC_Q_DIS_TIMEOUT_S		10
+#define ICE_AQC_Q_DIS_TIMEOUT_M		(0x3F << ICE_AQC_Q_DIS_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* The buffer for Disable TX LAN Queues (indirect 0x0C31)
+ * contains the following structures, arrayed one after the
+ * other.
+ * Note: Since the q_id is 16 bits wide, if the
+ * number of queues is even, then 2 bytes of alignment MUST be
+ * added before the start of the next group, to allow correct
+ * alignment of the parent_teid field.
+ */
+struct ice_aqc_dis_txq_item {
+	__le32 parent_teid;
+	u8 num_qs;
+	u8 rsvd;
+	/* The length of the q_id array varies according to num_qs */
+	__le16 q_id[1];
+	/* This only applies from F8 onward */
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S		15
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_LAN_Q	\
+			(0 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+#define ICE_AQC_Q_DIS_BUF_ELEM_TYPE_RDMA_QSET	\
+			(1 << ICE_AQC_Q_DIS_BUF_ELEM_TYPE_S)
+};
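+
+/* Worked example of the alignment rule above: a group with one queue is
+ * 4 + 1 + 1 + 2 = 8 bytes and already 4-byte aligned; a group with two
+ * queues is 10 bytes, so 2 pad bytes must follow before the next group's
+ * parent_teid.
+ */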
+
+
+struct ice_aqc_dis_txq {
+	struct ice_aqc_dis_txq_item qgrps[1];
+};
+
+
+/* TX LAN Queues Cleanup Event (0x0C31) */
+struct ice_aqc_txqs_cleanup {
+	__le16 caller_opc;
+	__le16 cmd_tag;
+	u8 reserved[12];
+};
+
+
+/* Move / Reconfigure TX Queues (indirect 0x0C32) */
+struct ice_aqc_move_txqs {
+	u8 cmd_type;
+#define ICE_AQC_Q_CMD_TYPE_S		0
+#define ICE_AQC_Q_CMD_TYPE_M		(0x3 << ICE_AQC_Q_CMD_TYPE_S)
+#define ICE_AQC_Q_CMD_TYPE_MOVE		1
+#define ICE_AQC_Q_CMD_TYPE_TC_CHANGE	2
+#define ICE_AQC_Q_CMD_TYPE_MOVE_AND_TC	3
+#define ICE_AQC_Q_CMD_SUBSEQ_CALL	BIT(2)
+#define ICE_AQC_Q_CMD_FLUSH_PIPE	BIT(3)
+	u8 num_qs;
+	u8 rsvd;
+	u8 timeout;
+#define ICE_AQC_Q_CMD_TIMEOUT_S		2
+#define ICE_AQC_Q_CMD_TIMEOUT_M		(0x3F << ICE_AQC_Q_CMD_TIMEOUT_S)
+	__le32 blocked_cgds;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/* This is the descriptor of each queue entry for the move TX LAN Queues
+ * command (0x0C32).
+ */
+struct ice_aqc_move_txqs_elem {
+	__le16 txq_id;
+	u8 q_cgd;
+	u8 rsvd;
+	__le32 q_teid;
+};
+
+
+struct ice_aqc_move_txqs_data {
+	__le32 src_teid;
+	__le32 dest_teid;
+	struct ice_aqc_move_txqs_elem txqs[1];
+};
+
+
+
+
+
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct ice_aqc_event_lan_overflow {
+	__le32 prtdcb_ruptq;
+	__le32 qtx_ctl;
+	u8 reserved[8];
+};
+
+
+
+/* Configure Firmware Logging Command (indirect 0xFF09)
+ * Logging Information Read Response (indirect 0xFF10)
+ * Note: The 0xFF10 command has no input parameters.
+ */
+struct ice_aqc_fw_logging {
+	u8 log_ctrl;
+#define ICE_AQC_FW_LOG_AQ_EN		BIT(0)
+#define ICE_AQC_FW_LOG_UART_EN		BIT(1)
+	u8 rsvd0;
+	u8 log_ctrl_valid; /* Not used by 0xFF10 Response */
+#define ICE_AQC_FW_LOG_AQ_VALID		BIT(0)
+#define ICE_AQC_FW_LOG_UART_VALID	BIT(1)
+	u8 rsvd1[5];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+enum ice_aqc_fw_logging_mod {
+	ICE_AQC_FW_LOG_ID_GENERAL = 0,
+	ICE_AQC_FW_LOG_ID_CTRL,
+	ICE_AQC_FW_LOG_ID_LINK,
+	ICE_AQC_FW_LOG_ID_LINK_TOPO,
+	ICE_AQC_FW_LOG_ID_DNL,
+	ICE_AQC_FW_LOG_ID_I2C,
+	ICE_AQC_FW_LOG_ID_SDP,
+	ICE_AQC_FW_LOG_ID_MDIO,
+	ICE_AQC_FW_LOG_ID_ADMINQ,
+	ICE_AQC_FW_LOG_ID_HDMA,
+	ICE_AQC_FW_LOG_ID_LLDP,
+	ICE_AQC_FW_LOG_ID_DCBX,
+	ICE_AQC_FW_LOG_ID_DCB,
+	ICE_AQC_FW_LOG_ID_NETPROXY,
+	ICE_AQC_FW_LOG_ID_NVM,
+	ICE_AQC_FW_LOG_ID_AUTH,
+	ICE_AQC_FW_LOG_ID_VPD,
+	ICE_AQC_FW_LOG_ID_IOSF,
+	ICE_AQC_FW_LOG_ID_PARSER,
+	ICE_AQC_FW_LOG_ID_SW,
+	ICE_AQC_FW_LOG_ID_SCHEDULER,
+	ICE_AQC_FW_LOG_ID_TXQ,
+	ICE_AQC_FW_LOG_ID_RSVD,
+	ICE_AQC_FW_LOG_ID_POST,
+	ICE_AQC_FW_LOG_ID_WATCHDOG,
+	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
+	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_MAX,
+};
+
+/* This is the buffer for both of the logging commands.
+ * The entry array size depends on the datalen parameter in the descriptor.
+ * There will be a total of datalen / 2 entries.
+ */
+struct ice_aqc_fw_logging_data {
+	__le16 entry[1];
+#define ICE_AQC_FW_LOG_ID_S		0
+#define ICE_AQC_FW_LOG_ID_M		(0xFFF << ICE_AQC_FW_LOG_ID_S)
+
+#define ICE_AQC_FW_LOG_CONF_SUCCESS	0	/* Used by response */
+#define ICE_AQC_FW_LOG_CONF_BAD_INDX	BIT(12)	/* Used by response */
+
+#define ICE_AQC_FW_LOG_EN_S		12
+#define ICE_AQC_FW_LOG_EN_M		(0xF << ICE_AQC_FW_LOG_EN_S)
+#define ICE_AQC_FW_LOG_INFO_EN		BIT(12)	/* Used by command */
+#define ICE_AQC_FW_LOG_INIT_EN		BIT(13)	/* Used by command */
+#define ICE_AQC_FW_LOG_FLOW_EN		BIT(14)	/* Used by command */
+#define ICE_AQC_FW_LOG_ERR_EN		BIT(15)	/* Used by command */
+};
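+
+/* Illustration (hypothetical entry value): enabling INFO and ERROR
+ * logging for the link module would encode one buffer entry as
+ *
+ *	entry[i] = CPU_TO_LE16(ICE_AQC_FW_LOG_ID_LINK |
+ *			       ICE_AQC_FW_LOG_INFO_EN |
+ *			       ICE_AQC_FW_LOG_ERR_EN);
+ */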
+
+
+/* Get/Clear FW Log (indirect 0xFF11) */
+struct ice_aqc_get_clear_fw_log {
+	u8 flags;
+#define ICE_AQC_FW_LOG_CLEAR		BIT(0)
+#define ICE_AQC_FW_LOG_MORE_DATA_AVAIL	BIT(1)
+	u8 rsvd1[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+
+/**
+ * struct ice_aq_desc - Admin Queue (AQ) descriptor
+ * @flags: ICE_AQ_FLAG_* flags
+ * @opcode: AQ command opcode
+ * @datalen: length in bytes of indirect/external data buffer
+ * @retval: return value from firmware
+ * @cookie_h: opaque data high-half
+ * @cookie_l: opaque data low-half
+ * @params: command-specific parameters
+ *
+ * Descriptor format for commands the driver posts on the Admin Transmit Queue
+ * (ATQ). The firmware writes back onto the command descriptor and returns
+ * the result of the command. Asynchronous events that are not an immediate
+ * result of the command are written to the Admin Receive Queue (ARQ) using
+ * the same descriptor format. Descriptors are in little-endian notation with
+ * 32-bit words.
+ */
+struct ice_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		u8 raw[16];
+		struct ice_aqc_generic generic;
+		struct ice_aqc_get_ver get_ver;
+		struct ice_aqc_q_shutdown q_shutdown;
+		struct ice_aqc_req_res res_owner;
+		struct ice_aqc_manage_mac_read mac_read;
+		struct ice_aqc_manage_mac_write mac_write;
+		struct ice_aqc_clear_pxe clear_pxe;
+		struct ice_aqc_list_caps get_cap;
+		struct ice_aqc_get_phy_caps get_phy;
+		struct ice_aqc_set_phy_cfg set_phy;
+		struct ice_aqc_restart_an restart_an;
+		struct ice_aqc_set_port_id_led set_port_id_led;
+		struct ice_aqc_get_sw_cfg get_sw_conf;
+		struct ice_aqc_sw_rules sw_rules;
+		struct ice_aqc_get_topo get_topo;
+		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
+		struct ice_aqc_query_txsched_res query_sched_res;
+		struct ice_aqc_query_node_to_root query_node_to_root;
+		struct ice_aqc_cfg_l2_node_cgd cfg_l2_node_cgd;
+		struct ice_aqc_rl_profile rl_profile;
+
+		struct ice_aqc_nvm nvm;
+		struct ice_aqc_nvm_cfg nvm_cfg;
+		struct ice_aqc_nvm_checksum nvm_checksum;
+		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
+		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_add_txqs add_txqs;
+		struct ice_aqc_dis_txqs dis_txqs;
+		struct ice_aqc_txqs_cleanup txqs_cleanup;
+		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
+		struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
+		struct ice_aqc_fw_logging fw_logging;
+		struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+		struct ice_aqc_set_mac_lb set_mac_lb;
+		struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
+		struct ice_aqc_set_event_mask set_event_mask;
+		struct ice_aqc_get_link_status get_link_status;
+	} params;
+};
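+
+/* Direct commands carry up to 16 bytes of parameters inline in 'params';
+ * indirect commands additionally set ICE_AQ_FLAG_BUF (plus ICE_AQ_FLAG_LB
+ * for buffers larger than ICE_AQ_LG_BUF) and point addr_high/addr_low at
+ * an external buffer, as the control queue send routine does.
+ */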
+
+
+/* FW defined boundary for a large buffer, 4k >= Large buffer > 512 bytes */
+#define ICE_AQ_LG_BUF	512
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define ICE_AQ_FLAG_DD_S	0
+#define ICE_AQ_FLAG_CMP_S	1
+#define ICE_AQ_FLAG_ERR_S	2
+#define ICE_AQ_FLAG_VFE_S	3
+#define ICE_AQ_FLAG_LB_S	9
+#define ICE_AQ_FLAG_RD_S	10
+#define ICE_AQ_FLAG_VFC_S	11
+#define ICE_AQ_FLAG_BUF_S	12
+#define ICE_AQ_FLAG_SI_S	13
+#define ICE_AQ_FLAG_EI_S	14
+#define ICE_AQ_FLAG_FE_S	15
+
+#define ICE_AQ_FLAG_DD		BIT(ICE_AQ_FLAG_DD_S)  /* 0x1    */
+#define ICE_AQ_FLAG_CMP		BIT(ICE_AQ_FLAG_CMP_S) /* 0x2    */
+#define ICE_AQ_FLAG_ERR		BIT(ICE_AQ_FLAG_ERR_S) /* 0x4    */
+#define ICE_AQ_FLAG_VFE		BIT(ICE_AQ_FLAG_VFE_S) /* 0x8    */
+#define ICE_AQ_FLAG_LB		BIT(ICE_AQ_FLAG_LB_S)  /* 0x200  */
+#define ICE_AQ_FLAG_RD		BIT(ICE_AQ_FLAG_RD_S)  /* 0x400  */
+#define ICE_AQ_FLAG_VFC		BIT(ICE_AQ_FLAG_VFC_S) /* 0x800  */
+#define ICE_AQ_FLAG_BUF		BIT(ICE_AQ_FLAG_BUF_S) /* 0x1000 */
+#define ICE_AQ_FLAG_SI		BIT(ICE_AQ_FLAG_SI_S)  /* 0x2000 */
+#define ICE_AQ_FLAG_EI		BIT(ICE_AQ_FLAG_EI_S)  /* 0x4000 */
+#define ICE_AQ_FLAG_FE		BIT(ICE_AQ_FLAG_FE_S)  /* 0x8000 */
+
+/* error codes */
+enum ice_aq_err {
+	ICE_AQ_RC_OK		= 0,  /* Success */
+	ICE_AQ_RC_EPERM		= 1,  /* Operation not permitted */
+	ICE_AQ_RC_ENOENT	= 2,  /* No such element */
+	ICE_AQ_RC_ESRCH		= 3,  /* Bad opcode */
+	ICE_AQ_RC_EINTR		= 4,  /* Operation interrupted */
+	ICE_AQ_RC_EIO		= 5,  /* I/O error */
+	ICE_AQ_RC_ENXIO		= 6,  /* No such resource */
+	ICE_AQ_RC_E2BIG		= 7,  /* Arg too long */
+	ICE_AQ_RC_EAGAIN	= 8,  /* Try again */
+	ICE_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	ICE_AQ_RC_EACCES	= 10, /* Permission denied */
+	ICE_AQ_RC_EFAULT	= 11, /* Bad address */
+	ICE_AQ_RC_EBUSY		= 12, /* Device or resource busy */
+	ICE_AQ_RC_EEXIST	= 13, /* Object already exists */
+	ICE_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	ICE_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	ICE_AQ_RC_ENOSPC	= 16, /* No space left or allocation failure */
+	ICE_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	ICE_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	ICE_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	ICE_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	ICE_AQ_RC_EMODE		= 21, /* Op not allowed in current dev mode */
+	ICE_AQ_RC_EFBIG		= 22, /* File too big */
+	ICE_AQ_RC_ESBCOMP	= 23, /* SB-IOSF completion unsuccessful */
+	ICE_AQ_RC_ENOSEC	= 24, /* Missing security manifest */
+	ICE_AQ_RC_EBADSIG	= 25, /* Bad RSA signature */
+	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
+	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
+	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+};
+
+/* Admin Queue command opcodes */
+enum ice_adminq_opc {
+	/* AQ commands */
+	ice_aqc_opc_get_ver				= 0x0001,
+	ice_aqc_opc_driver_ver				= 0x0002,
+	ice_aqc_opc_q_shutdown				= 0x0003,
+	ice_aqc_opc_get_exp_err				= 0x0005,
+
+	/* resource ownership */
+	ice_aqc_opc_req_res				= 0x0008,
+	ice_aqc_opc_release_res				= 0x0009,
+
+	/* device/function capabilities */
+	ice_aqc_opc_list_func_caps			= 0x000A,
+	ice_aqc_opc_list_dev_caps			= 0x000B,
+
+	/* manage MAC address */
+	ice_aqc_opc_manage_mac_read			= 0x0107,
+	ice_aqc_opc_manage_mac_write			= 0x0108,
+
+	/* PXE */
+	ice_aqc_opc_clear_pxe_mode			= 0x0110,
+
+	/* internal switch commands */
+	ice_aqc_opc_get_sw_cfg				= 0x0200,
+
+	/* Alloc/Free/Get Resources */
+	ice_aqc_opc_get_res_alloc			= 0x0204,
+	ice_aqc_opc_alloc_res				= 0x0208,
+	ice_aqc_opc_free_res				= 0x0209,
+	ice_aqc_opc_get_allocd_res_desc			= 0x020A,
+
+	/* VSI commands */
+	ice_aqc_opc_add_vsi				= 0x0210,
+	ice_aqc_opc_update_vsi				= 0x0211,
+	ice_aqc_opc_get_vsi_params			= 0x0212,
+	ice_aqc_opc_free_vsi				= 0x0213,
+
+
+
+	/* switch rules population commands */
+	ice_aqc_opc_add_sw_rules			= 0x02A0,
+	ice_aqc_opc_update_sw_rules			= 0x02A1,
+	ice_aqc_opc_remove_sw_rules			= 0x02A2,
+	ice_aqc_opc_get_sw_rules			= 0x02A3,
+	ice_aqc_opc_clear_pf_cfg			= 0x02A4,
+
+
+	/* transmit scheduler commands */
+	ice_aqc_opc_get_dflt_topo			= 0x0400,
+	ice_aqc_opc_add_sched_elems			= 0x0401,
+	ice_aqc_opc_cfg_sched_elems			= 0x0403,
+	ice_aqc_opc_get_sched_elems			= 0x0404,
+	ice_aqc_opc_move_sched_elems			= 0x0408,
+	ice_aqc_opc_suspend_sched_elems			= 0x0409,
+	ice_aqc_opc_resume_sched_elems			= 0x040A,
+	ice_aqc_opc_suspend_sched_traffic		= 0x040B,
+	ice_aqc_opc_resume_sched_traffic		= 0x040C,
+	ice_aqc_opc_delete_sched_elems			= 0x040F,
+	ice_aqc_opc_add_rl_profiles			= 0x0410,
+	ice_aqc_opc_query_rl_profiles			= 0x0411,
+	ice_aqc_opc_query_sched_res			= 0x0412,
+	ice_aqc_opc_query_node_to_root			= 0x0413,
+	ice_aqc_opc_cfg_l2_node_cgd			= 0x0414,
+	ice_aqc_opc_remove_rl_profiles			= 0x0415,
+
+	/* PHY commands */
+	ice_aqc_opc_get_phy_caps			= 0x0600,
+	ice_aqc_opc_set_phy_cfg				= 0x0601,
+	ice_aqc_opc_set_mac_cfg				= 0x0603,
+	ice_aqc_opc_restart_an				= 0x0605,
+	ice_aqc_opc_get_link_status			= 0x0607,
+	ice_aqc_opc_set_event_mask			= 0x0613,
+	ice_aqc_opc_set_mac_lb				= 0x0620,
+	ice_aqc_opc_set_port_id_led			= 0x06E9,
+	ice_aqc_opc_get_port_options			= 0x06EA,
+	ice_aqc_opc_set_port_option			= 0x06EB,
+	ice_aqc_opc_set_gpio				= 0x06EC,
+	ice_aqc_opc_get_gpio				= 0x06ED,
+
+	/* NVM commands */
+	ice_aqc_opc_nvm_read				= 0x0701,
+	ice_aqc_opc_nvm_erase				= 0x0702,
+	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_cfg_read			= 0x0704,
+	ice_aqc_opc_nvm_cfg_write			= 0x0705,
+	ice_aqc_opc_nvm_checksum			= 0x0706,
+
+
+	/* RSS commands */
+	ice_aqc_opc_set_rss_key				= 0x0B02,
+	ice_aqc_opc_set_rss_lut				= 0x0B03,
+	ice_aqc_opc_get_rss_key				= 0x0B04,
+	ice_aqc_opc_get_rss_lut				= 0x0B05,
+
+	/* TX queue handling commands/events */
+	ice_aqc_opc_add_txqs				= 0x0C30,
+	ice_aqc_opc_dis_txqs				= 0x0C31,
+	ice_aqc_opc_txqs_cleanup			= 0x0C31,
+	ice_aqc_opc_move_recfg_txqs			= 0x0C32,
+
+
+
+
+	/* Standalone Commands/Events */
+	ice_aqc_opc_event_lan_overflow			= 0x1001,
+
+	/* debug commands */
+	ice_aqc_opc_fw_logging				= 0xFF09,
+	ice_aqc_opc_fw_logging_info			= 0xFF10,
+	ice_aqc_opc_get_clear_fw_log			= 0xFF11
+};
+
+#endif /* _ICE_ADMINQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 04/31] net/ice/base: add sideband queue info
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (2 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
                     ` (27 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the commands, error codes, and structures
for the sideband queue.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sbq_cmd.h | 93 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h

diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h
new file mode 100644
index 0000000..6dff378
--- /dev/null
+++ b/drivers/net/ice/base/ice_sbq_cmd.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SBQ_CMD_H_
+#define _ICE_SBQ_CMD_H_
+
+/* This header file defines the Sideband Queue commands, error codes and
+ * descriptor format. It is shared between Firmware and Software.
+ */
+
+/* Sideband Queue command structure and opcodes */
+enum ice_sbq_opc {
+	/* Sideband Queue commands */
+	ice_sbq_opc_neigh_dev_req			= 0x0C00,
+	ice_sbq_opc_neigh_dev_ev			= 0x0C01
+};
+
+/* Sideband Queue descriptor. Indirect command
+ * and non-posted
+ */
+struct ice_sbq_cmd_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+
+	/* Opaque message data */
+	__le32 cookie_high;
+	__le32 cookie_low;
+
+	union {
+		__le16 cmd_len;
+		__le16 cmpl_len;
+	} param0;
+
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_sbq_evt_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 cmd_retval;
+	u8 data[24];
+};
+
+enum ice_sbq_msg_dev {
+	rmn_0	= 0x02,
+	rmn_1	= 0x03,
+	rmn_2	= 0x04,
+	cgu	= 0x06
+};
+
+enum ice_sbq_msg_opcode {
+	ice_sbq_msg_rd	= 0x00,
+	ice_sbq_msg_wr	= 0x01
+};
+
+#define ICE_SBQ_MSG_FLAGS	0x40
+#define ICE_SBQ_MSG_SBE_FBE	0x0F
+
+struct ice_sbq_msg_req {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	u8 sbe_fbe;
+	u8 func_id;
+	__le16 msg_addr_low;
+	__le32 msg_addr_high;
+	__le32 data;
+};
+
+struct ice_sbq_msg_cmpl {
+	u8 dest_dev;
+	u8 src_dev;
+	u8 opcode;
+	u8 flags;
+	__le32 data;
+};
+
+/* Internal struct */
+struct ice_sbq_msg_input {
+	u8 dest_dev;
+	u8 opcode;
+	u16 msg_addr_low;
+	u32 msg_addr_high;
+	u32 data;
+};
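+
+/* Usage sketch (hypothetical read request; the address is illustrative):
+ *
+ *	struct ice_sbq_msg_input msg = { 0 };
+ *
+ *	msg.dest_dev = cgu;
+ *	msg.opcode = ice_sbq_msg_rd;
+ *	msg.msg_addr_low = 0x24;
+ *	msg.msg_addr_high = 0;
+ */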
+#endif /* _ICE_SBQ_CMD_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (3 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 06/31] net/ice/base: add control queue information Wenzhuo Lu
                     ` (26 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the PCI device IDs for the Intel(R) E800 series NICs.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_devids.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_devids.h

diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
new file mode 100644
index 0000000..87f17ab
--- /dev/null
+++ b/drivers/net/ice/base/ice_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_DEVIDS_H_
+#define _ICE_DEVIDS_H_
+
+
+/* Device IDs */
+/* Intel(R) Ethernet Controller E810-C for backplane */
+#define ICE_DEV_ID_E810C_BACKPLANE	0x1591
+/* Intel(R) Ethernet Controller E810-C for QSFP */
+#define ICE_DEV_ID_E810C_QSFP		0x1592
+/* Intel(R) Ethernet Controller E810-C for SFP */
+#define ICE_DEV_ID_E810C_SFP		0x1593
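+
+/* Usage sketch (hypothetical; the PMD's real probe table lives in the
+ * driver proper, not in this header). These IDs typically feed a DPDK
+ * PCI ID map such as:
+ *
+ *	static const struct rte_pci_id pci_id_map[] = {
+ *		{ RTE_PCI_DEVICE(0x8086, ICE_DEV_ID_E810C_BACKPLANE) },
+ *		{ RTE_PCI_DEVICE(0x8086, ICE_DEV_ID_E810C_QSFP) },
+ *		{ RTE_PCI_DEVICE(0x8086, ICE_DEV_ID_E810C_SFP) },
+ *		{ .vendor_id = 0 },
+ *	};
+ */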
+
+#endif /* _ICE_DEVIDS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 06/31] net/ice/base: add control queue information
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (4 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
                     ` (25 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and support routines for the control queues
(the admin queue and the PF-VF mailbox queue).

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_controlq.c | 1098 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_controlq.h |   97 ++++
 2 files changed, 1195 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
new file mode 100644
index 0000000..fb82c23
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -0,0 +1,1098 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+#define ICE_CQ_INIT_REGS(qinfo, prefix)				\
+do {								\
+	(qinfo)->sq.head = prefix##_ATQH;			\
+	(qinfo)->sq.tail = prefix##_ATQT;			\
+	(qinfo)->sq.len = prefix##_ATQLEN;			\
+	(qinfo)->sq.bah = prefix##_ATQBAH;			\
+	(qinfo)->sq.bal = prefix##_ATQBAL;			\
+	(qinfo)->sq.len_mask = prefix##_ATQLEN_ATQLEN_M;	\
+	(qinfo)->sq.len_ena_mask = prefix##_ATQLEN_ATQENABLE_M;	\
+	(qinfo)->sq.head_mask = prefix##_ATQH_ATQH_M;		\
+	(qinfo)->rq.head = prefix##_ARQH;			\
+	(qinfo)->rq.tail = prefix##_ARQT;			\
+	(qinfo)->rq.len = prefix##_ARQLEN;			\
+	(qinfo)->rq.bah = prefix##_ARQBAH;			\
+	(qinfo)->rq.bal = prefix##_ARQBAL;			\
+	(qinfo)->rq.len_mask = prefix##_ARQLEN_ARQLEN_M;	\
+	(qinfo)->rq.len_ena_mask = prefix##_ARQLEN_ARQENABLE_M;	\
+	(qinfo)->rq.head_mask = prefix##_ARQH_ARQH_M;		\
+} while (0)
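+
+/* For example, ICE_CQ_INIT_REGS(cq, PF_FW) pastes the prefix onto each
+ * register name, so cq->sq.head is set to the PF_FW_ATQH register offset,
+ * and likewise for the remaining ATQ/ARQ registers.
+ */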
+
+/**
+ * ice_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_adminq_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+
+	ICE_CQ_INIT_REGS(cq, PF_FW);
+}
+
+/**
+ * ice_mailbox_init_regs - Initialize Mailbox registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_sq and alloc_rq functions have already been called
+ */
+static void ice_mailbox_init_regs(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+
+	ICE_CQ_INIT_REGS(cq, PF_MBX);
+}
+
+
+/**
+ * ice_check_sq_alive
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the queue is enabled, else false.
+ */
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* check both queue-length and queue-enable fields */
+	if (cq->sq.len && cq->sq.len_mask && cq->sq.len_ena_mask)
+		return (rd32(hw, cq->sq.len) & (cq->sq.len_mask |
+						cq->sq.len_ena_mask)) ==
+			(cq->num_sq_entries | cq->sq.len_ena_mask);
+
+	return false;
+}
+
+/**
+ * ice_alloc_ctrlq_sq_ring - Allocate Control Transmit Queue (ATQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_sq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_sq_entries * sizeof(struct ice_aq_desc);
+
+	cq->sq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->sq.desc_buf, size);
+	if (!cq->sq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+
+	cq->sq.cmd_buf = ice_calloc(hw, cq->num_sq_entries,
+				    sizeof(struct ice_sq_cd));
+	if (!cq->sq.cmd_buf) {
+		ice_free_dma_mem(hw, &cq->sq.desc_buf);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_alloc_ctrlq_rq_ring - Allocate Control Receive Queue (ARQ) rings
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_ctrlq_rq_ring(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	size_t size = cq->num_rq_entries * sizeof(struct ice_aq_desc);
+
+	cq->rq.desc_buf.va = ice_alloc_dma_mem(hw, &cq->rq.desc_buf, size);
+	if (!cq->rq.desc_buf.va)
+		return ICE_ERR_NO_MEMORY;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_free_cq_ring - Free control queue ring
+ * @hw: pointer to the hardware structure
+ * @ring: pointer to the specific control queue ring
+ *
+ * This assumes the posted buffers have already been cleaned
+ * and de-allocated
+ */
+static void ice_free_cq_ring(struct ice_hw *hw, struct ice_ctl_q_ring *ring)
+{
+	ice_free_dma_mem(hw, &ring->desc_buf);
+}
+
+/**
+ * ice_alloc_rq_bufs - Allocate pre-posted buffers for the ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_rq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+	cq->rq.dma_head = ice_calloc(hw, cq->num_rq_entries,
+				     sizeof(cq->rq.desc_buf));
+	if (!cq->rq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->rq.r.rq_bi = (struct ice_dma_mem *)cq->rq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_rq_entries; i++) {
+		struct ice_aq_desc *desc;
+		struct ice_dma_mem *bi;
+
+		bi = &cq->rq.r.rq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->rq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_rq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = ICE_CTL_Q_DESC(cq->rq, i);
+
+		desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16(bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+		desc->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+		desc->params.generic.param0 = 0;
+		desc->params.generic.param1 = 0;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_rq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->rq.r.rq_bi[i]);
+	ice_free(hw, cq->rq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+/**
+ * ice_alloc_sq_bufs - Allocate empty buffer structs for the ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ */
+static enum ice_status
+ice_alloc_sq_bufs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	cq->sq.dma_head = ice_calloc(hw, cq->num_sq_entries,
+				     sizeof(cq->sq.desc_buf));
+	if (!cq->sq.dma_head)
+		return ICE_ERR_NO_MEMORY;
+	cq->sq.r.sq_bi = (struct ice_dma_mem *)cq->sq.dma_head;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < cq->num_sq_entries; i++) {
+		struct ice_dma_mem *bi;
+
+		bi = &cq->sq.r.sq_bi[i];
+		bi->va = ice_alloc_dma_mem(hw, bi, cq->sq_buf_size);
+		if (!bi->va)
+			goto unwind_alloc_sq_bufs;
+	}
+	return ICE_SUCCESS;
+
+unwind_alloc_sq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		ice_free_dma_mem(hw, &cq->sq.r.sq_bi[i]);
+	ice_free(hw, cq->sq.dma_head);
+
+	return ICE_ERR_NO_MEMORY;
+}
+
+static enum ice_status
+ice_cfg_cq_regs(struct ice_hw *hw, struct ice_ctl_q_ring *ring, u16 num_entries)
+{
+	/* Clear Head and Tail */
+	wr32(hw, ring->head, 0);
+	wr32(hw, ring->tail, 0);
+
+	/* set starting point */
+	wr32(hw, ring->len, (num_entries | ring->len_ena_mask));
+	wr32(hw, ring->bal, ICE_LO_DWORD(ring->desc_buf.pa));
+	wr32(hw, ring->bah, ICE_HI_DWORD(ring->desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	if (rd32(hw, ring->bal) != ICE_LO_DWORD(ring->desc_buf.pa))
+		return ICE_ERR_AQ_ERROR;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_sq_regs - configure Control ATQ registers
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the transmit queue
+ */
+static enum ice_status
+ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	return ice_cfg_cq_regs(hw, &cq->sq, cq->num_sq_entries);
+}
+
+/**
+ * ice_cfg_rq_regs - configure Control ARQ register
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Configure base address and length registers for the receive (event) queue
+ */
+static enum ice_status
+ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status status;
+
+	status = ice_cfg_cq_regs(hw, &cq->rq, cq->num_rq_entries);
+	if (status)
+		return status;
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, cq->rq.tail, (u32)(cq->num_rq_entries - 1));
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_sq - main initialization routine for Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * This is the main initialization routine for the Control Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->sq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this, as the memory allocation
+ * routines called are not atomic-context safe
+ */
+static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->sq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_sq_entries || !cq->sq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->sq.next_to_use = 0;
+	cq->sq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_sq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_sq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_sq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->sq.count = cq->num_sq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->sq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+/**
+ * ice_init_rq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this, as the memory allocation
+ * routines called are not atomic-context safe
+ */
+static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code;
+
+	if (cq->rq.count > 0) {
+		/* queue already initialized */
+		ret_code = ICE_ERR_NOT_READY;
+		goto init_ctrlq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->rq_buf_size) {
+		ret_code = ICE_ERR_CFG;
+		goto init_ctrlq_exit;
+	}
+
+	cq->rq.next_to_use = 0;
+	cq->rq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = ice_alloc_ctrlq_rq_ring(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = ice_alloc_rq_bufs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* initialize base registers */
+	ret_code = ice_cfg_rq_regs(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_rings;
+
+	/* success! */
+	cq->rq.count = cq->num_rq_entries;
+	goto init_ctrlq_exit;
+
+init_ctrlq_free_rings:
+	ice_free_cq_ring(hw, &cq->rq);
+
+init_ctrlq_exit:
+	return ret_code;
+}
+
+#define ICE_FREE_CQ_BUFS(hw, qi, ring)					\
+do {									\
+	int i;								\
+	/* free descriptors */						\
+	for (i = 0; i < (qi)->num_##ring##_entries; i++)		\
+		if ((qi)->ring.r.ring##_bi[i].pa)			\
+			ice_free_dma_mem((hw),				\
+					 &(qi)->ring.r.ring##_bi[i]);	\
+	/* free the buffer info list */					\
+	if ((qi)->ring.cmd_buf)						\
+		ice_free(hw, (qi)->ring.cmd_buf);			\
+	/* free dma head */						\
+	ice_free(hw, (qi)->ring.dma_head);				\
+} while (0)
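+
+/* For example, ICE_FREE_CQ_BUFS(hw, cq, sq) expands the ring token so the
+ * loop bound becomes (qi)->num_sq_entries and the freed buffers are
+ * (qi)->sq.r.sq_bi[i]; the rq variant works the same way.
+ */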
+
+/**
+ * ice_shutdown_sq - shutdown the Control ATQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Transmit Queue
+ */
+static enum ice_status
+ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->sq_lock);
+
+	if (!cq->sq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_sq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, cq->sq.head, 0);
+	wr32(hw, cq->sq.tail, 0);
+	wr32(hw, cq->sq.len, 0);
+	wr32(hw, cq->sq.bal, 0);
+	wr32(hw, cq->sq.bah, 0);
+
+	cq->sq.count = 0;	/* to indicate uninitialized queue */
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, sq);
+	ice_free_cq_ring(hw, &cq->sq);
+
+shutdown_sq_out:
+	ice_release_lock(&cq->sq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_aq_ver_check - Check the reported AQ API version.
+ * @hw: pointer to the hardware structure
+ *
+ * Checks if the driver should load on a given AQ API version.
+ *
+ * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+ */
+static bool ice_aq_ver_check(struct ice_hw *hw)
+{
+	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
+		/* Major API version is newer than expected, don't load */
+		ice_warn(hw, "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+		return false;
+	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
+		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
+			ice_info(hw, "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
+		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
+			ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	} else {
+		/* Major API version is older than expected, log a warning */
+		ice_info(hw, "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
+	}
+	return true;
+}
+
+/**
+ * ice_shutdown_rq - shutdown Control ARQ
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for the Control Receive Queue
+ */
+static enum ice_status
+ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ret_code = ICE_ERR_NOT_READY;
+		goto shutdown_rq_out;
+	}
+
+	/* Stop Control Queue processing */
+	wr32(hw, cq->rq.head, 0);
+	wr32(hw, cq->rq.tail, 0);
+	wr32(hw, cq->rq.len, 0);
+	wr32(hw, cq->rq.bal, 0);
+	wr32(hw, cq->rq.bah, 0);
+
+	/* set rq.count to 0 to indicate uninitialized queue */
+	cq->rq.count = 0;
+
+	/* free ring buffers and the ring itself */
+	ICE_FREE_CQ_BUFS(hw, cq, rq);
+	ice_free_cq_ring(hw, &cq->rq);
+
+shutdown_rq_out:
+	ice_release_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+
+/**
+ * ice_init_check_adminq - Check version for Admin Queue to know if it's alive
+ * @hw: pointer to the hardware structure
+ */
+static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+{
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	enum ice_status status;
+
+
+	status = ice_aq_get_fw_ver(hw, NULL);
+	if (status)
+		goto init_ctrlq_free_rq;
+
+
+	if (!ice_aq_ver_check(hw)) {
+		status = ICE_ERR_FW_API_VER;
+		goto init_ctrlq_free_rq;
+	}
+
+	return ICE_SUCCESS;
+
+init_ctrlq_free_rq:
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_init_ctrlq - main initialization routine for any control Queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+	enum ice_status ret_code;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		ice_adminq_init_regs(hw);
+		cq = &hw->adminq;
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		ice_mailbox_init_regs(hw);
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	cq->qtype = q_type;
+
+	/* verify input for valid configuration */
+	if (!cq->num_rq_entries || !cq->num_sq_entries ||
+	    !cq->rq_buf_size || !cq->sq_buf_size) {
+		return ICE_ERR_CFG;
+	}
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+
+	/* setup SQ command write back timeout */
+	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
+
+	/* allocate the ATQ */
+	ret_code = ice_init_sq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_destroy_locks;
+
+	/* allocate the ARQ */
+	ret_code = ice_init_rq(hw, cq);
+	if (ret_code)
+		goto init_ctrlq_free_sq;
+
+	/* success! */
+	return ICE_SUCCESS;
+
+init_ctrlq_free_sq:
+	ice_shutdown_sq(hw, cq);
+init_ctrlq_destroy_locks:
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+	return ret_code;
+}
+
+/**
+ * ice_init_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ */
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
+{
+	enum ice_status ret_code;
+
+
+	/* Init FW admin queue */
+	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = ice_init_check_adminq(hw);
+	if (ret_code)
+		return ret_code;
+	/* Init Mailbox queue */
+	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
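+
+/* Usage sketch (hypothetical caller; the entry counts are illustrative,
+ * the buffer-size macros come from ice_controlq.h):
+ *
+ *	hw->adminq.num_sq_entries = 32;
+ *	hw->adminq.num_rq_entries = 32;
+ *	hw->adminq.sq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	hw->adminq.rq_buf_size = ICE_AQ_MAX_BUF_LEN;
+ *	hw->mailboxq.num_sq_entries = 32;
+ *	hw->mailboxq.num_rq_entries = 32;
+ *	hw->mailboxq.sq_buf_size = ICE_MBXQ_MAX_BUF_LEN;
+ *	hw->mailboxq.rq_buf_size = ICE_MBXQ_MAX_BUF_LEN;
+ *	status = ice_init_all_ctrlq(hw);
+ */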
+
+/**
+ * ice_shutdown_ctrlq - shutdown routine for any control queue
+ * @hw: pointer to the hardware structure
+ * @q_type: specific Control queue type
+ */
+static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+{
+	struct ice_ctl_q_info *cq;
+
+	switch (q_type) {
+	case ICE_CTL_Q_ADMIN:
+		cq = &hw->adminq;
+		if (ice_check_sq_alive(hw, cq))
+			ice_aq_q_shutdown(hw, true);
+		break;
+	case ICE_CTL_Q_MAILBOX:
+		cq = &hw->mailboxq;
+		break;
+	default:
+		return;
+	}
+
+	if (cq->sq.count) {
+		ice_shutdown_sq(hw, cq);
+		ice_destroy_lock(&cq->sq_lock);
+	}
+	if (cq->rq.count) {
+		ice_shutdown_rq(hw, cq);
+		ice_destroy_lock(&cq->rq_lock);
+	}
+}
+
+/**
+ * ice_shutdown_all_ctrlq - shutdown routine for all control queues
+ * @hw: pointer to the hardware structure
+ */
+void ice_shutdown_all_ctrlq(struct ice_hw *hw)
+{
+	/* Shutdown FW admin queue */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
+	/* Shutdown PF-VF Mailbox */
+	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
+}
+
+/**
+ * ice_clean_sq - cleans Admin send queue (ATQ)
+ * @hw: pointer to the hardware structure
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns the number of free descriptors.
+ */
+static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	struct ice_ctl_q_ring *sq = &cq->sq;
+	u16 ntc = sq->next_to_clean;
+	struct ice_sq_cd *details;
+#if 0
+	struct ice_aq_desc desc_cb;
+#endif
+	struct ice_aq_desc *desc;
+
+	desc = ICE_CTL_Q_DESC(*sq, ntc);
+	details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+	while (rd32(hw, cq->sq.head) != ntc) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+#if 0
+		if (details->callback) {
+			ICE_CTL_Q_CALLBACK cb_func =
+				(ICE_CTL_Q_CALLBACK)details->callback;
+			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
+				   ICE_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+#endif
+		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+		ntc++;
+		if (ntc == sq->count)
+			ntc = 0;
+		desc = ICE_CTL_Q_DESC(*sq, ntc);
+		details = ICE_CTL_Q_DETAILS(*sq, ntc);
+	}
+
+	sq->next_to_clean = ntc;
+
+	return ICE_CTL_Q_DESC_UNUSED(sq);
+}
+
+/**
+ * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq)
+{
+	/* AQ designers suggest using the head register for better
+	 * timing reliability than the DD bit
+	 */
+	return rd32(hw, cq->sq.head) == cq->sq.next_to_use;
+}
+
+/**
+ * ice_sq_send_cmd - send command to Control Queue (ATQ)
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buf: buffer to use for indirect commands (or NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (or 0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * This is the main send command routine for the ATQ. It runs the queue,
+ * cleans the queue, etc.
+ */
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_dma_mem *dma_buf = NULL;
+	struct ice_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sq_cd *details;
+	u32 total_delay = 0;
+	u16 retval = 0;
+	u32 val = 0;
+
+	/* if reset is in progress return a soft error */
+	if (hw->reset_ongoing)
+		return ICE_ERR_RESET_ONGOING;
+	ice_acquire_lock(&cq->sq_lock);
+
+	cq->sq_last_status = ICE_AQ_RC_OK;
+
+	if (!cq->sq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send queue not initialized.\n");
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	if ((buf && !buf_size) || (!buf && buf_size)) {
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+
+	if (buf) {
+		if (buf_size > cq->sq_buf_size) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Invalid buffer size for Control Send queue: %d.\n",
+				  buf_size);
+			status = ICE_ERR_INVAL_SIZE;
+			goto sq_send_command_error;
+		}
+
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+		if (buf_size > ICE_AQ_LG_BUF)
+			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	}
+
+	val = rd32(hw, cq->sq.head);
+	if (val >= cq->num_sq_entries) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "head overrun at %d in the Control Send Queue ring\n",
+			  val);
+		status = ICE_ERR_AQ_EMPTY;
+		goto sq_send_command_error;
+	}
+
+	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
+	if (cd)
+		*details = *cd;
+#if 0
+		/* FIXME: if/when this block gets enabled (when the #if 0
+		 * is removed), add braces to both branches of the surrounding
+		 * conditional expression. The braces have been removed to
+		 * prevent checkpatch complaining.
+		 */
+
+		/* If the command details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
+		}
+#endif
+	else
+		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
+#if 0
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (details->postpone && !details->async) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Async flag not set along with postpone flag\n");
+		status = ICE_ERR_PARAM;
+		goto sq_send_command_error;
+	}
+#endif
+
+	/* Call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW/MBX; the function returns the
+	 * number of desc available. The clean function called here could be
+	 * called in a separate thread in case of asynchronous completions.
+	 */
+	if (ice_clean_sq(hw, cq) == 0) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Error: Control Send Queue is full.\n");
+		status = ICE_ERR_AQ_FULL;
+		goto sq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = ICE_CTL_Q_DESC(cq->sq, cq->sq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	ice_memcpy(desc_on_ring, desc, sizeof(*desc_on_ring),
+		   ICE_NONDMA_TO_DMA);
+
+	/* if buf is not NULL assume indirect command */
+	if (buf) {
+		dma_buf = &cq->sq.r.sq_bi[cq->sq.next_to_use];
+		/* copy the user buf into the respective DMA buf */
+		ice_memcpy(dma_buf->va, buf, buf_size, ICE_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buf_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.generic.addr_high =
+			CPU_TO_LE32(ICE_HI_DWORD(dma_buf->pa));
+		desc_on_ring->params.generic.addr_low =
+			CPU_TO_LE32(ICE_LO_DWORD(dma_buf->pa));
+	}
+
+	/* Debug desc and buffer */
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: Control Send queue desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+
+
+	(cq->sq.next_to_use)++;
+	if (cq->sq.next_to_use == cq->sq.count)
+		cq->sq.next_to_use = 0;
+#if 0
+	/* FIXME - handle this case? */
+	if (!details->postpone)
+#endif
+	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
+
+#if 0
+	/* if command details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		/* FIXME - handle this case? */
+	}
+#endif
+	do {
+		if (ice_sq_done(hw, cq))
+			break;
+
+		ice_msec_delay(1, false);
+		total_delay++;
+	} while (total_delay < cq->sq_cmd_timeout);
+
+	/* if ready, copy the desc back to temp */
+	if (ice_sq_done(hw, cq)) {
+		ice_memcpy(desc, desc_on_ring, sizeof(*desc),
+			   ICE_DMA_TO_NONDMA);
+		if (buf) {
+			/* get returned length to copy */
+			u16 copy_size = LE16_TO_CPU(desc->datalen);
+
+			if (copy_size > buf_size) {
+				ice_debug(hw, ICE_DBG_AQ_MSG,
+					  "Return len %d > than buf len %d\n",
+					  copy_size, buf_size);
+				status = ICE_ERR_AQ_ERROR;
+			} else {
+				ice_memcpy(buf, dma_buf->va, copy_size,
+					   ICE_DMA_TO_NONDMA);
+			}
+		}
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval) {
+			ice_debug(hw, ICE_DBG_AQ_MSG,
+				  "Control Send Queue command completed with error 0x%x\n",
+				  retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if (!status && retval != ICE_AQ_RC_OK)
+			status = ICE_ERR_AQ_ERROR;
+		cq->sq_last_status = (enum ice_aq_err)retval;
+	}
+
+	ice_debug(hw, ICE_DBG_AQ_MSG,
+		  "ATQ: desc and buffer writeback:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+
+
+	/* save writeback AQ if requested */
+	if (details->wb_desc)
+		ice_memcpy(details->wb_desc, desc_on_ring,
+			   sizeof(*details->wb_desc), ICE_DMA_TO_NONDMA);
+
+	/* update the error if a timeout occurred */
+	if (!cmd_completed) {
+#if 0
+	    (!details->async && !details->postpone)) {
+#endif
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Send Queue Writeback timeout.\n");
+		status = ICE_ERR_AQ_TIMEOUT;
+	}
+
+sq_send_command_error:
+	ice_release_lock(&cq->sq_lock);
+	return status;
+}
+
+/**
+ * ice_fill_dflt_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ */
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode)
+{
+	/* zero out the desc */
+	ice_memset(desc, 0, sizeof(*desc), ICE_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_SI);
+}
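+
+/* Usage sketch (hypothetical direct command with no data buffer):
+ *
+ *	struct ice_aq_desc desc;
+ *	enum ice_status status;
+ *
+ *	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+ *	status = ice_sq_send_cmd(hw, &hw->adminq, &desc, NULL, 0, NULL);
+ */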
+
+/**
+ * ice_clean_rq_elem
+ * @hw: pointer to the hw struct
+ * @cq: pointer to the specific Control queue
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'.
+ */
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending)
+{
+	u16 ntc = cq->rq.next_to_clean;
+	enum ice_status ret_code = ICE_SUCCESS;
+	struct ice_aq_desc *desc;
+	struct ice_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	ice_memset(&e->desc, 0, sizeof(e->desc), ICE_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	ice_acquire_lock(&cq->rq_lock);
+
+	if (!cq->rq.count) {
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive queue not initialized.\n");
+		ret_code = ICE_ERR_AQ_EMPTY;
+		goto clean_rq_elem_err;
+	}
+
+	/* set next_to_use to head */
+	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = ICE_ERR_AQ_NO_WORK;
+		goto clean_rq_elem_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = ICE_CTL_Q_DESC(cq->rq, ntc);
+	desc_idx = ntc;
+
+	cq->rq_last_status = (enum ice_aq_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & ICE_AQ_FLAG_ERR) {
+		ret_code = ICE_ERR_AQ_ERROR;
+		ice_debug(hw, ICE_DBG_AQ_MSG,
+			  "Control Receive Queue Event received with error 0x%x\n",
+			  cq->rq_last_status);
+	}
+	ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf && e->msg_len)
+		ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va,
+			   e->msg_len, ICE_DMA_TO_NONDMA);
+
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
+
+	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+		     cq->rq_buf_size);
+
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message size
+	 */
+	bi = &cq->rq.r.rq_bi[ntc];
+	ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	if (cq->rq_buf_size > ICE_AQ_LG_BUF)
+		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16(bi->size);
+	desc->params.generic.addr_high = CPU_TO_LE32(ICE_HI_DWORD(bi->pa));
+	desc->params.generic.addr_low = CPU_TO_LE32(ICE_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, cq->rq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == cq->num_rq_entries)
+		ntc = 0;
+	cq->rq.next_to_clean = ntc;
+	cq->rq.next_to_use = ntu;
+
+#if 0
+	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
+#endif
+clean_rq_elem_out:
+	/* Set pending if needed, unlock and return */
+	if (pending) {
+		/* re-read HW head to calculate actual pending messages */
+		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
+	}
+clean_rq_elem_err:
+	ice_release_lock(&cq->rq_lock);
+
+	return ret_code;
+}
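+
+/* Usage sketch (hypothetical polling loop; handle_event() is a made-up
+ * handler, error handling elided):
+ *
+ *	struct ice_rq_event_info event = { 0 };
+ *	u16 pending = 0;
+ *
+ *	event.buf_len = cq->rq_buf_size;
+ *	event.msg_buf = (u8 *)ice_malloc(hw, event.buf_len);
+ *	do {
+ *		if (ice_clean_rq_elem(hw, cq, &event, &pending))
+ *			break;
+ *		handle_event(LE16_TO_CPU(event.desc.opcode), &event);
+ *	} while (pending);
+ */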
diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
new file mode 100644
index 0000000..db2db93
--- /dev/null
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_CONTROLQ_H_
+#define _ICE_CONTROLQ_H_
+
+#include "ice_adminq_cmd.h"
+
+
+/* Maximum buffer lengths for all control queue types */
+#define ICE_AQ_MAX_BUF_LEN 4096
+#define ICE_MBXQ_MAX_BUF_LEN 4096
+
+#define ICE_CTL_Q_DESC(R, i) \
+	(&(((struct ice_aq_desc *)((R).desc_buf.va))[i]))
+
+#define ICE_CTL_Q_DESC_UNUSED(R) \
+	(u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	      (R)->next_to_clean - (R)->next_to_use - 1)
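+
+/* Worked example: with count = 32, next_to_clean = 3 and next_to_use = 5,
+ * the macro yields (32 + 3 - 5 - 1) = 29 free descriptors; the -1 keeps
+ * one slot unused so a full ring can be told apart from an empty one.
+ */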
+
+/* Defines that help manage the driver vs FW API checks.
+ * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
+ */
+#define EXP_FW_API_VER_BRANCH		0x00
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
+
+/* Different control queue types: These are mainly for SW consumption. */
+enum ice_ctl_q {
+	ICE_CTL_Q_UNKNOWN = 0,
+	ICE_CTL_Q_ADMIN,
+	ICE_CTL_Q_MAILBOX,
+};
+
+/* Control Queue default settings */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	250  /* msecs */
+
+struct ice_ctl_q_ring {
+	void *dma_head;			/* Virtual address to dma head */
+	struct ice_dma_mem desc_buf;	/* descriptor ring memory */
+	void *cmd_buf;			/* command buffer memory */
+
+	union {
+		struct ice_dma_mem *sq_bi;
+		struct ice_dma_mem *rq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+	u32 len_mask;
+	u32 len_ena_mask;
+	u32 head_mask;
+};
+
+/* sq transaction details */
+struct ice_sq_cd {
+	struct ice_aq_desc *wb_desc;
+};
+
+#define ICE_CTL_Q_DETAILS(R, i) (&(((struct ice_sq_cd *)((R).cmd_buf))[i]))
+
+/* rq event information */
+struct ice_rq_event_info {
+	struct ice_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Control Queue information */
+struct ice_ctl_q_info {
+	enum ice_ctl_q qtype;
+	struct ice_ctl_q_ring rq;	/* receive queue */
+	struct ice_ctl_q_ring sq;	/* send queue */
+	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
+	u16 num_rq_entries;		/* receive queue depth */
+	u16 num_sq_entries;		/* send queue depth */
+	u16 rq_buf_size;		/* receive queue buffer size */
+	u16 sq_buf_size;		/* send queue buffer size */
+	struct ice_lock sq_lock;		/* Send queue lock */
+	struct ice_lock rq_lock;		/* Receive queue lock */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
+};
+
+#endif /* _ICE_CONTROLQ_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 07/31] net/ice/base: add basic transmit scheduler
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (5 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 06/31] net/ice/base: add control queue information Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
                     ` (24 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code for the basic TX scheduler.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 5380 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_sched.h |  210 ++
 2 files changed, 5590 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
new file mode 100644
index 0000000..7acbae6
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.c
@@ -0,0 +1,5380 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_sched.h"
+
+
+/**
+ * ice_sched_add_root_node - Insert the Tx scheduler root node in SW DB
+ * @pi: port information structure
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts the root node of the scheduling tree topology
+ * to the SW DB.
+ */
+static enum ice_status
+ice_sched_add_root_node(struct ice_port_info *pi,
+			struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *root;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	root = (struct ice_sched_node *)ice_malloc(hw, sizeof(*root));
+	if (!root)
+		return ICE_ERR_NO_MEMORY;
+
+	/* coverity[suspicious_sizeof] */
+	root->children = (struct ice_sched_node **)
+		ice_calloc(hw, hw->max_children[0], sizeof(*root));
+	if (!root->children) {
+		ice_free(hw, root);
+		return ICE_ERR_NO_MEMORY;
+	}
+
+	ice_memcpy(&root->info, info, sizeof(*info), ICE_DMA_TO_NONDMA);
+	pi->root = root;
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_find_node_by_teid - Find the Tx scheduler node in SW DB
+ * @start_node: pointer to the starting ice_sched_node struct in a sub-tree
+ * @teid: node teid to search
+ *
+ * This function searches for a node matching the teid in the scheduling tree
+ * from the SW DB. The search is recursive and is restricted by the number of
+ * layers it has searched through; stopping at the max supported layer.
+ *
+ * This function needs to be called when holding the port_info->sched_lock
+ */
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+{
+	u16 i;
+
+	/* The TEID is the same as that of the start_node */
+	if (ICE_TXSCHED_GET_NODE_TEID(start_node) == teid)
+		return start_node;
+
+	/* The node has no children or is at the max layer */
+	if (!start_node->num_children ||
+	    start_node->tx_sched_layer >= ICE_AQC_TOPO_MAX_LEVEL_NUM ||
+	    start_node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF)
+		return NULL;
+
+	/* Check if teid matches to any of the children nodes */
+	for (i = 0; i < start_node->num_children; i++)
+		if (ICE_TXSCHED_GET_NODE_TEID(start_node->children[i]) == teid)
+			return start_node->children[i];
+
+	/* Search within each child's sub-tree */
+	for (i = 0; i < start_node->num_children; i++) {
+		struct ice_sched_node *tmp;
+
+		tmp = ice_sched_find_node_by_teid(start_node->children[i],
+						  teid);
+		if (tmp)
+			return tmp;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+ * @hw: pointer to the hw struct
+ * @cmd_opc: cmd opcode
+ * @elems_req: number of elements to request
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_resp: returns total number of elements in response
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends a scheduling elements cmd (cmd_opc)
+ */
+static enum ice_status
+ice_aqc_send_sched_elem_cmd(struct ice_hw *hw, enum ice_adminq_opc cmd_opc,
+			    u16 elems_req, void *buf, u16 buf_size,
+			    u16 *elems_resp, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_sched_elem_cmd *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.sched_elem_cmd;
+	ice_fill_dflt_direct_cmd_desc(&desc, cmd_opc);
+	cmd->num_elem_req = CPU_TO_LE16(elems_req);
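+	/* indirect command: the buffer carries data for FW to read (RD flag) */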
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && elems_resp)
+		*elems_resp = LE16_TO_CPU(cmd->num_elem_resp);
+
+	return status;
+}
+
+/**
+ * ice_aq_query_sched_elems - query scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements returned
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduling elements (0x0404)
+ */
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_get_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_sched_add_node - Insert the Tx scheduler node in SW DB
+ * @pi: port information structure
+ * @layer: Scheduler layer of the node
+ * @info: Scheduler element information from firmware
+ *
+ * This function inserts a scheduler node to the SW DB.
+ */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_sched_node *parent;
+	struct ice_aqc_get_elem elem;
+	struct ice_sched_node *node;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	/* A valid parent node should be there */
+	parent = ice_sched_find_node_by_teid(pi->root,
+					     LE32_TO_CPU(info->parent_teid));
+	if (!parent) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Parent Node not found for parent_teid=0x%x\n",
+			  LE32_TO_CPU(info->parent_teid));
+		return ICE_ERR_PARAM;
+	}
+
+	/* query the current node information from FW before adding it
+	 * to the SW DB
+	 */
+	status = ice_sched_query_elem(hw, LE32_TO_CPU(info->node_teid), &elem);
+	if (status)
+		return status;
+	node = (struct ice_sched_node *)ice_malloc(hw, sizeof(*node));
+	if (!node)
+		return ICE_ERR_NO_MEMORY;
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = (struct ice_sched_node **)
+			ice_calloc(hw, hw->max_children[layer], sizeof(*node));
+		if (!node->children) {
+			ice_free(hw, node);
+			return ICE_ERR_NO_MEMORY;
+		}
+	}
+
+	node->in_use = true;
+	node->parent = parent;
+	node->tx_sched_layer = layer;
+	parent->children[parent->num_children++] = node;
+	node->info = elem.generic[0];
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_delete_sched_elems - delete scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to delete
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_del: returns total number of elements deleted
+ * @cd: pointer to command details structure or NULL
+ *
+ * Delete scheduling elements (0x040F)
+ */
+static enum ice_status
+ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
+			  struct ice_aqc_delete_elem *buf, u16 buf_size,
+			  u16 *grps_del, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_delete_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_del, cd);
+}
+
+/**
+ * ice_sched_remove_elems - remove nodes from hw
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the parent node
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be deleted
+ *
+ * This function removes nodes from HW
+ */
+static enum ice_status
+ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
+		       u16 num_nodes, u32 *node_teids)
+{
+	struct ice_aqc_delete_elem *buf;
+	u16 i, num_groups_removed = 0;
+	enum ice_status status;
+	u16 buf_size;
+
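+	/* buf holds one teid entry already; add space for the remainder */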
+	buf_size = sizeof(*buf) + sizeof(u32) * (num_nodes - 1);
+	buf = (struct ice_aqc_delete_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
+					   &num_groups_removed, NULL);
+	if (status != ICE_SUCCESS || num_groups_removed != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
+			  hw->adminq.sq_last_status);
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_first_node - get the first node of the given layer
+ * @hw: pointer to the hw struct
+ * @parent: pointer to the base node of the subtree
+ * @layer: layer number
+ *
+ * This function retrieves the first node of the given layer from the subtree
+ */
+static struct ice_sched_node *
+ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
+			 u8 layer)
+{
+	u8 i;
+
+	if (layer < hw->sw_entry_point_layer)
+		return NULL;
+	for (i = 0; i < parent->num_children; i++) {
+		struct ice_sched_node *node = parent->children[i];
+
+		if (node) {
+			if (node->tx_sched_layer == layer)
+				return node;
+			/* this recursion is intentional; it won't
+			 * go deeper than 9 calls
+			 */
+			return ice_sched_get_first_node(hw, node, layer);
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_sched_get_tc_node - get pointer to TC node
+ * @pi: port information structure
+ * @tc: TC number
+ *
+ * This function returns the TC node pointer
+ */
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
+{
+	u8 i;
+
+	if (!pi)
+		return NULL;
+	for (i = 0; i < pi->root->num_children; i++)
+		if (pi->root->children[i]->tc_num == tc)
+			return pi->root->children[i];
+	return NULL;
+}
+
+/**
+ * ice_free_sched_node - Free a Tx scheduler node from SW DB
+ * @pi: port information structure
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function frees up a node from SW DB as well as from HW
+ *
+ * This function needs to be called with the port_info->sched_lock held
+ */
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	struct ice_sched_node *parent;
+	struct ice_hw *hw = pi->hw;
+	u8 i, j;
+
+	/* Free the children before freeing up the parent node
+	 * The parent array is updated below and that shifts the nodes
+	 * in the array. So always pick the first child if num children > 0
+	 */
+	while (node->num_children)
+		ice_free_sched_node(pi, node->children[0]);
+
+	/* Leaf, TC and root nodes can't be deleted by SW */
+	if (node->tx_sched_layer >= hw->sw_entry_point_layer &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT &&
+	    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+
+		ice_sched_remove_elems(hw, node->parent, 1, &teid);
+	}
+	parent = node->parent;
+	/* root has no parent */
+	if (parent) {
+		struct ice_sched_node *p, *tc_node;
+
+		/* update the parent */
+		for (i = 0; i < parent->num_children; i++)
+			if (parent->children[i] == node) {
+				for (j = i + 1; j < parent->num_children; j++)
+					parent->children[j - 1] =
+						parent->children[j];
+				parent->num_children--;
+				break;
+			}
+
+		/* search for previous sibling that points to this node and
+		 * remove the reference
+		 */
+		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
+		if (!tc_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Invalid TC number %d\n", node->tc_num);
+			goto err_exit;
+		}
+		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		while (p) {
+			if (p->sibling == node) {
+				p->sibling = node->sibling;
+				break;
+			}
+			p = p->sibling;
+		}
+	}
+err_exit:
+	/* leaf nodes have no children */
+	if (node->children)
+		ice_free(hw, node->children);
+	ice_free(hw, node);
+}
+
+/**
+ * ice_aq_get_dflt_topo - gets default scheduler topology
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_branches: returns total number of queue to port branches
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get default scheduler topology (0x400)
+ */
+static enum ice_status
+ice_aq_get_dflt_topo(struct ice_hw *hw, u8 lport,
+		     struct ice_aqc_get_topo_elem *buf, u16 buf_size,
+		     u8 *num_branches, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_topo *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_topo;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_dflt_topo);
+	cmd->port_num = lport;
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_branches)
+		*num_branches = cmd->num_branches;
+
+	return status;
+}
+
+/**
+ * ice_aq_add_sched_elems - adds scheduling element
+ * @hw: pointer to the hw struct
+ * @grps_req: the number of groups that are requested to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_added: returns total number of groups added
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add scheduling elements (0x0401)
+ */
+static enum ice_status
+ice_aq_add_sched_elems(struct ice_hw *hw, u16 grps_req,
+		       struct ice_aqc_add_elem *buf, u16 buf_size,
+		       u16 *grps_added, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_add_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_added, cd);
+}
+
+/**
+ * ice_aq_cfg_sched_elems - configures scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_cfgd: returns total number of elements configured
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure scheduling elements (0x0403)
+ */
+static enum ice_status
+ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
+		       struct ice_aqc_conf_elem *buf, u16 buf_size,
+		       u16 *elems_cfgd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_cfg_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_cfgd, cd);
+}
+
+/**
+ * ice_aq_move_sched_elems - move scheduler elements
+ * @hw: pointer to the hw struct
+ * @grps_req: number of groups to move
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @grps_movd: returns total number of groups moved
+ * @cd: pointer to command details structure or NULL
+ *
+ * Move scheduling elements (0x0408)
+ */
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
+					   grps_req, (void *)buf, buf_size,
+					   grps_movd, cd);
+}
+
+/**
+ * ice_aq_suspend_sched_elems - suspend scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to suspend
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements suspended
+ * @cd: pointer to command details structure or NULL
+ *
+ * Suspend scheduling elements (0x0409)
+ */
+static enum ice_status
+ice_aq_suspend_sched_elems(struct ice_hw *hw, u16 elems_req,
+			   struct ice_aqc_suspend_resume_elem *buf,
+			   u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_suspend_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_resume_sched_elems - resume scheduler elements
+ * @hw: pointer to the hw struct
+ * @elems_req: number of elements to resume
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @elems_ret: returns total number of elements resumed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Resume scheduling elements (0x040A)
+ */
+static enum ice_status
+ice_aq_resume_sched_elems(struct ice_hw *hw, u16 elems_req,
+			  struct ice_aqc_suspend_resume_elem *buf,
+			  u16 buf_size, u16 *elems_ret, struct ice_sq_cd *cd)
+{
+	return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_resume_sched_elems,
+					   elems_req, (void *)buf, buf_size,
+					   elems_ret, cd);
+}
+
+/**
+ * ice_aq_query_sched_res - query scheduler resource
+ * @hw: pointer to the hw struct
+ * @buf_size: buffer size in bytes
+ * @buf: pointer to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query scheduler resource allocation (0x0412)
+ */
+static enum ice_status
+ice_aq_query_sched_res(struct ice_hw *hw, u16 buf_size,
+		       struct ice_aqc_query_txsched_res_resp *buf,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_sched_res);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_sched_suspend_resume_elems - suspend or resume hw nodes
+ * @hw: pointer to the hw struct
+ * @num_nodes: number of nodes
+ * @node_teids: array of node teids to be suspended or resumed
+ * @suspend: true means suspend / false means resume
+ *
+ * This function suspends or resumes hw nodes
+ */
+static enum ice_status
+ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
+			       bool suspend)
+{
+	struct ice_aqc_suspend_resume_elem *buf;
+	u16 i, buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf) * num_nodes;
+	buf = (struct ice_aqc_suspend_resume_elem *)
+		ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_nodes; i++)
+		buf->teid[i] = CPU_TO_LE32(node_teids[i]);
+
+	if (suspend)
+		status = ice_aq_suspend_sched_elems(hw, num_nodes, buf,
+						    buf_size, &num_elem_ret,
+						    NULL);
+	else
+		status = ice_aq_resume_sched_elems(hw, num_nodes, buf,
+						   buf_size, &num_elem_ret,
+						   NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != num_nodes)
+		ice_debug(hw, ICE_DBG_SCHED, "suspend/resume failed\n");
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_aq_rl_profile - performs a rate limiting task
+ * @hw: pointer to the hw struct
+ * @opcode: opcode for add, query, or remove profile(s)
+ * @num_profiles: the number of profiles
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_processed: returns the number of add or remove profile(s) processed
+ * @cd: pointer to command details structure
+ *
+ * RL profile function to add, query, or remove profile(s)
+ */
+static enum ice_status
+ice_aq_rl_profile(struct ice_hw *hw, enum ice_adminq_opc opcode,
+		  u16 num_profiles, struct ice_aqc_rl_profile_generic_elem *buf,
+		  u16 buf_size, u16 *num_processed, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_rl_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.rl_profile;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	cmd->num_profiles = CPU_TO_LE16(num_profiles);
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status && num_processed)
+		*num_processed = LE16_TO_CPU(cmd->num_processed);
+	return status;
+}
+
+/**
+ * ice_aq_add_rl_profile - adds rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to be added
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_added: returns the total number of profiles added
+ * @cd: pointer to command details structure
+ *
+ * Add rl profile (0x0410)
+ */
+static enum ice_status
+ice_aq_add_rl_profile(struct ice_hw *hw, u16 num_profiles,
+		      struct ice_aqc_rl_profile_generic_elem *buf,
+		      u16 buf_size, u16 *num_profiles_added,
+		      struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_add_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_added, cd);
+}
+
+/**
+ * ice_aq_query_rl_profile - query rate limiting profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to query
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure
+ *
+ * Query rl profile (0x0411)
+ */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_query_rl_profiles,
+				 num_profiles, buf, buf_size, NULL, cd);
+}
+
+/**
+ * ice_aq_remove_rl_profile - removes rl profile(s)
+ * @hw: pointer to the hw struct
+ * @num_profiles: the number of profile(s) to remove
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @num_profiles_removed: returns the total number of profiles removed
+ * @cd: pointer to command details structure or NULL
+ *
+ * Remove rl profile (0x0415)
+ */
+static enum ice_status
+ice_aq_remove_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			 struct ice_aqc_rl_profile_generic_elem *buf,
+			 u16 buf_size, u16 *num_profiles_removed,
+			 struct ice_sq_cd *cd)
+{
+	return ice_aq_rl_profile(hw, ice_aqc_opc_remove_rl_profiles,
+				 num_profiles, buf,
+				 buf_size, num_profiles_removed, cd);
+}
+
+/**
+ * ice_sched_clear_rl_prof - clears rl prof entries
+ * @pi: port information structure
+ *
+ * This function removes all RL profiles from HW as well as from the SW DB.
+ */
+static void ice_sched_clear_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			struct ice_hw *hw = pi->hw;
+			enum ice_status status;
+
+			rl_prof_elem->prof_id_ref = 0;
+			status = ice_sched_del_rl_profile(hw, rl_prof_elem);
+			if (status) {
+				ice_debug(hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+				/* On error, free mem required */
+				LIST_DEL(&rl_prof_elem->list_entry);
+				ice_free(hw, rl_prof_elem);
+			}
+		}
+	}
+}
+
+/**
+ * ice_sched_clear_agg - clears the agg related information
+ * @hw: pointer to the hardware structure
+ *
+ * This function removes agg list and free up agg related memory
+ * previously allocated.
+ */
+void ice_sched_clear_agg(struct ice_hw *hw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(hw, agg_vsi_info);
+		}
+		LIST_DEL(&agg_info->list_entry);
+		ice_free(hw, agg_info);
+	}
+}
+
+/**
+ * ice_sched_clear_tx_topo - clears the scheduler tree nodes
+ * @pi: port information structure
+ *
+ * This function removes all the nodes from HW as well as from SW DB.
+ */
+static void ice_sched_clear_tx_topo(struct ice_port_info *pi)
+{
+	if (!pi)
+		return;
+	/* remove rl profiles related lists */
+	ice_sched_clear_rl_prof(pi);
+	if (pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+}
+
+/**
+ * ice_sched_clear_port - clear the scheduler elements from SW DB for a port
+ * @pi: port information structure
+ *
+ * Cleanup scheduling elements from SW DB
+ */
+void ice_sched_clear_port(struct ice_port_info *pi)
+{
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return;
+
+	pi->port_state = ICE_SCHED_PORT_STATE_INIT;
+	ice_acquire_lock(&pi->sched_lock);
+	ice_sched_clear_tx_topo(pi);
+	ice_release_lock(&pi->sched_lock);
+	ice_destroy_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_cleanup_all - cleanup scheduler elements from SW DB for all ports
+ * @hw: pointer to the hw struct
+ *
+ * Cleanup scheduling elements from SW DB for all the ports
+ */
+void ice_sched_cleanup_all(struct ice_hw *hw)
+{
+	if (!hw)
+		return;
+
+	if (hw->layer_info) {
+		ice_free(hw, hw->layer_info);
+		hw->layer_info = NULL;
+	}
+
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
+
+	hw->num_tx_sched_layers = 0;
+	hw->num_tx_sched_phys_layers = 0;
+	hw->flattened_layers = 0;
+	hw->max_cgds = 0;
+}
+
+/**
+ * ice_aq_cfg_l2_node_cgd - configures L2 node to CGD mapping
+ * @hw: pointer to the hw struct
+ * @num_l2_nodes: the number of L2 nodes whose CGDs to configure
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * Configure L2 Node CGD (0x0414)
+ */
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_l2_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf,
+		       u16 buf_size, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_cfg_l2_node_cgd *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.cfg_l2_node_cgd;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_cfg_l2_node_cgd);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_l2_nodes = CPU_TO_LE16(num_l2_nodes);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+
+/**
+ * ice_sched_add_elems - add nodes to hw and SW DB
+ * @pi: port information structure
+ * @tc_node: pointer to the branch node
+ * @parent: pointer to the parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes
+ * @num_nodes_added: pointer to num nodes added
+ * @first_node_teid: if new nodes are added then return the teid of first node
+ *
+ * This function adds nodes to HW as well as to the SW DB for a given layer
+ */
+static enum ice_status
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid)
+{
+	struct ice_sched_node *prev, *new_node;
+	struct ice_aqc_add_elem *buf;
+	u16 i, num_groups_added = 0;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 buf_size;
+	u32 teid;
+
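+	/* buf holds one generic element already; add space for the remainder */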
+	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	buf->hdr.parent_teid = parent->info.node_teid;
+	buf->hdr.num_elems = CPU_TO_LE16(num_nodes);
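+	/* request generic SEs with the default RL profile IDs and BW weights */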
+	for (i = 0; i < num_nodes; i++) {
+		buf->generic[i].parent_teid = parent->info.node_teid;
+		buf->generic[i].data.elem_type = ICE_AQC_ELEM_TYPE_SE_GENERIC;
+		buf->generic[i].data.valid_sections =
+			ICE_AQC_ELEM_VALID_GENERIC | ICE_AQC_ELEM_VALID_CIR |
+			ICE_AQC_ELEM_VALID_EIR;
+		buf->generic[i].data.generic = 0;
+		buf->generic[i].data.cir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+		buf->generic[i].data.eir_bw.bw_profile_idx =
+			CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			CPU_TO_LE16(ICE_SCHED_DFLT_BW_WT);
+	}
+
+	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
+					&num_groups_added, NULL);
+	if (status != ICE_SUCCESS || num_groups_added != 1) {
+		ice_debug(hw, ICE_DBG_SCHED, "add node failed FW Error %d\n",
+			  hw->adminq.sq_last_status);
+		ice_free(hw, buf);
+		return ICE_ERR_CFG;
+	}
+
+	*num_nodes_added = num_nodes;
+	/* add nodes to the SW DB */
+	for (i = 0; i < num_nodes; i++) {
+		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
+		if (status != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "add nodes in SW DB failed status =%d\n",
+				  status);
+			break;
+		}
+
+		teid = LE32_TO_CPU(buf->generic[i].node_teid);
+		new_node = ice_sched_find_node_by_teid(parent, teid);
+		if (!new_node) {
+			ice_debug(hw, ICE_DBG_SCHED,
+				  "Node is missing for teid =%d\n", teid);
+			break;
+		}
+
+		new_node->sibling = NULL;
+		new_node->tc_num = tc_node->tc_num;
+
+		/* add it to previous node sibling pointer */
+		/* Note: siblings are not linked across branches */
+		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		if (prev && prev != new_node) {
+			while (prev->sibling)
+				prev = prev->sibling;
+			prev->sibling = new_node;
+		}
+
+		if (i == 0)
+			*first_node_teid = teid;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_nodes_to_layer - Add nodes to a given layer
+ * @pi: port information structure
+ * @tc_node: pointer to TC node
+ * @parent: pointer to parent node
+ * @layer: layer number to add nodes
+ * @num_nodes: number of nodes to be added
+ * @first_node_teid: pointer to the first node teid
+ * @num_nodes_added: pointer to number of nodes added
+ *
+ * This function adds nodes to a given layer.
+ */
+static enum ice_status
+ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+			     struct ice_sched_node *tc_node,
+			     struct ice_sched_node *parent, u8 layer,
+			     u16 num_nodes, u32 *first_node_teid,
+			     u16 *num_nodes_added)
+{
+	u32 *first_teid_ptr = first_node_teid;
+	u16 new_num_nodes, max_child_nodes;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 num_added = 0;
+	u32 temp;
+
+	*num_nodes_added = 0;
+
+	if (!num_nodes)
+		return status;
+
+	if (!parent || layer < hw->sw_entry_point_layer)
+		return ICE_ERR_PARAM;
+
+	/* max children per node per layer */
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
+
+	/* current number of children + required nodes exceed max children ? */
+	if ((parent->num_children + num_nodes) > max_child_nodes) {
+		/* Fail if the parent is a TC node */
+		if (parent == tc_node)
+			return ICE_ERR_CFG;
+
+		/* use all the remaining slots if the parent is not full */
+		if (parent->num_children < max_child_nodes) {
+			new_num_nodes = max_child_nodes - parent->num_children;
+			/* this recursion is intentional; it won't
+			 * go deeper than 2 calls
+			 */
+			status = ice_sched_add_nodes_to_layer(pi, tc_node,
+							      parent, layer,
+							      new_num_nodes,
+							      first_node_teid,
+							      &num_added);
+			if (status != ICE_SUCCESS)
+				return status;
+
+			*num_nodes_added += num_added;
+		}
+		/* Don't modify the first node teid memory if the first node was
+		 * already added in the above call. Instead, pass temporary
+		 * storage to all further recursive calls.
+		 */
+		if (num_added)
+			first_teid_ptr = &temp;
+
+		new_num_nodes = num_nodes - num_added;
+
+		/* This parent is full, try the next sibling */
+		parent = parent->sibling;
+
+		/* this recursion is intentional; for 1024 queues
+		 * per VSI it takes at most 16 iterations:
+		 * 1024 / 8 = 128 layer-8 nodes
+		 * 128 / 8 = 16 (8 nodes added per iteration)
+		 */
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      layer, new_num_nodes,
+						      first_teid_ptr,
+						      &num_added);
+		*num_nodes_added += num_added;
+		return status;
+	}
+
+	status = ice_sched_add_elems(pi, tc_node, parent, layer, num_nodes,
+				     num_nodes_added, first_node_teid);
+	return status;
+}
+
+/**
+ * ice_sched_get_qgrp_layer - get the current queue group layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current queue group layer number
+ */
+static u8 ice_sched_get_qgrp_layer(struct ice_hw *hw)
+{
+	/* It's always total layers - 1; the array is 0-based, hence the -2 */
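+	/* e.g. 9 total layers: leaf is layer 8 (0-based), queue group is 7 */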
+	return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
+}
+
+/**
+ * ice_sched_get_vsi_layer - get the current VSI layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current VSI layer number
+ */
+static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
+{
+	/* Num Layers       VSI layer
+	 *     9               6
+	 *     7               4
+	 *     5 or less       sw_entry_point_layer
+	 */
+	/* calculate the vsi layer based on number of layers. */
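+	/* e.g. 9 layers: VSI layer = 9 - ICE_VSI_LAYER_OFFSET = 6 */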
+	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_sched_get_agg_layer - get the current aggregator layer number
+ * @hw: pointer to the hw struct
+ *
+ * This function returns the current aggregator layer number
+ */
+static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
+{
+	/* Num Layers       agg layer
+	 *     9               4
+	 *     7 or less       sw_entry_point_layer
+	 */
+	/* calculate the agg layer based on number of layers. */
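+	/* e.g. 9 layers: agg layer = 9 - ICE_AGG_LAYER_OFFSET = 4 */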
+	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
+		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+
+		if (layer > hw->sw_entry_point_layer)
+			return layer;
+	}
+	return hw->sw_entry_point_layer;
+}
+
+/**
+ * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
+ * @pi: port information structure
+ *
+ * This function removes the leaf node that was created by the FW
+ * during initialization
+ */
+static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	node = pi->root;
+	while (node) {
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+	if (node && node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		u32 teid = LE32_TO_CPU(node->info.node_teid);
+		enum ice_status status;
+
+		/* remove the default leaf node */
+		status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+		if (!status)
+			ice_free_sched_node(pi, node);
+	}
+}
+
+/**
+ * ice_sched_rm_dflt_nodes - free the default nodes in the tree
+ * @pi: port information structure
+ *
+ * This function frees all the nodes except root and TC that were created by
+ * the FW during initialization
+ */
+static void ice_sched_rm_dflt_nodes(struct ice_port_info *pi)
+{
+	struct ice_sched_node *node;
+
+	ice_rm_dflt_leaf_node(pi);
+
+	/* remove the default nodes except TC and root nodes */
+	node = pi->root;
+	while (node) {
+		if (node->tx_sched_layer >= pi->hw->sw_entry_point_layer &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_TC &&
+		    node->info.data.elem_type != ICE_AQC_ELEM_TYPE_ROOT_PORT) {
+			ice_free_sched_node(pi, node);
+			break;
+		}
+
+		if (!node->num_children)
+			break;
+		node = node->children[0];
+	}
+}
+
+/**
+ * ice_sched_init_port - Initialize scheduler by querying information from FW
+ * @pi: port information structure
+ *
+ * This function is the initial call to find the total number of Tx scheduler
+ * resources and the default topology created by firmware, and to store that
+ * information in the SW DB.
+ */
+enum ice_status ice_sched_init_port(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_topo_elem *buf;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 num_branches;
+	u16 num_elems;
+	u8 i, j;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+
+	/* Query the Default Topology from FW */
+	buf = (struct ice_aqc_get_topo_elem *)ice_malloc(hw,
+							 ICE_AQ_MAX_BUF_LEN);
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Query default scheduling tree topology */
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
+				      &num_branches, NULL);
+	if (status)
+		goto err_init_port;
+
+	/* num_branches should be between 1-8 */
+	if (num_branches < 1 || num_branches > ICE_TXSCHED_MAX_BRANCHES) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_branches unexpected %d\n",
+			  num_branches);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* get the number of elements on the default/first branch */
+	num_elems = LE16_TO_CPU(buf[0].hdr.num_elems);
+
+	/* num_elems should always be between 1-9 */
+	if (num_elems < 1 || num_elems > ICE_AQC_TOPO_MAX_LEVEL_NUM) {
+		ice_debug(hw, ICE_DBG_SCHED, "num_elems unexpected %d\n",
+			  num_elems);
+		status = ICE_ERR_PARAM;
+		goto err_init_port;
+	}
+
+	/* If the last node is a leaf node then the index of the Q group
+	 * layer is two less than the number of elements.
+	 */
+	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
+	    ICE_AQC_ELEM_TYPE_LEAF)
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 2].node_teid);
+	else
+		pi->last_node_teid =
+			LE32_TO_CPU(buf[0].generic[num_elems - 1].node_teid);
+
+	/* Insert the Tx Sched root node */
+	status = ice_sched_add_root_node(pi, &buf[0].generic[0]);
+	if (status)
+		goto err_init_port;
+
+	/* Parse the default tree and cache the information */
+	for (i = 0; i < num_branches; i++) {
+		num_elems = LE16_TO_CPU(buf[i].hdr.num_elems);
+
+		/* Skip root element as already inserted */
+		for (j = 1; j < num_elems; j++) {
+			/* update the sw entry point */
+			if (buf[0].generic[j].data.elem_type ==
+			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
+				hw->sw_entry_point_layer = j;
+
+			status = ice_sched_add_node(pi, j, &buf[i].generic[j]);
+			if (status)
+				goto err_init_port;
+		}
+	}
+
+	/* Remove the default nodes. */
+	if (pi->root)
+		ice_sched_rm_dflt_nodes(pi);
+
+	/* initialize the port for handling the scheduler tree */
+	pi->port_state = ICE_SCHED_PORT_STATE_READY;
+	ice_init_lock(&pi->sched_lock);
+	for (i = 0; i < ICE_AQC_TOPO_MAX_LEVEL_NUM; i++)
+		INIT_LIST_HEAD(&pi->rl_prof_list[i]);
+
+err_init_port:
+	if (status && pi->root) {
+		ice_free_sched_node(pi, pi->root);
+		pi->root = NULL;
+	}
+
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_get_node - Get the struct ice_sched_node for given teid
+ * @pi: port information structure
+ * @teid: Scheduler node TEID
+ *
+ * This function retrieves the ice_sched_node struct for given teid from
+ * the SW DB and returns it to the caller.
+ */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid)
+{
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return NULL;
+
+	/* Find the node starting from root */
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_find_node_by_teid(pi->root, teid);
+	ice_release_lock(&pi->sched_lock);
+
+	if (!node)
+		ice_debug(pi->hw, ICE_DBG_SCHED,
+			  "Node not found for teid=0x%x\n", teid);
+
+	return node;
+}
+
+/**
+ * ice_sched_query_res_alloc - query the FW for num of logical sched layers
+ * @hw: pointer to the HW struct
+ *
+ * query FW for allocated scheduler resources and store in HW struct
+ */
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
+{
+	struct ice_aqc_query_txsched_res_resp *buf;
+	enum ice_status status = ICE_SUCCESS;
+	__le16 max_sibl;
+	u8 i;
+
+	if (hw->layer_info)
+		return status;
+
+	buf = (struct ice_aqc_query_txsched_res_resp *)
+		ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	status = ice_aq_query_sched_res(hw, sizeof(*buf), buf, NULL);
+	if (status)
+		goto sched_query_out;
+
+	hw->num_tx_sched_layers = LE16_TO_CPU(buf->sched_props.logical_levels);
+	hw->num_tx_sched_phys_layers =
+		LE16_TO_CPU(buf->sched_props.phys_levels);
+	hw->flattened_layers = buf->sched_props.flattening_bitmap;
+	hw->max_cgds = buf->sched_props.max_pf_cgds;
+
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers - 1; i++) {
+		max_sibl = buf->layer_props[i + 1].max_sibl_grp_sz;
+		hw->max_children[i] = LE16_TO_CPU(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			 ice_memdup(hw, buf->layer_props,
+				    (hw->num_tx_sched_layers *
+				     sizeof(*hw->layer_info)),
+				    ICE_DMA_TO_DMA);
+	if (!hw->layer_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto sched_query_out;
+	}
+
+
+sched_query_out:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_find_node_in_subtree - Find node in part of base node subtree
+ * @hw: pointer to the hw struct
+ * @base: pointer to the base node
+ * @node: pointer to the node to search
+ *
+ * This function checks whether a given node is part of the base node
+ * subtree or not
+ */
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < base->num_children; i++) {
+		struct ice_sched_node *child = base->children[i];
+
+		if (node == child)
+			return true;
+
+		if (child->tx_sched_layer > node->tx_sched_layer)
+			return false;
+
+		/* this recursion is intentional; it won't
+		 * go deeper than 8 calls
+		 */
+		if (ice_sched_find_node_in_subtree(hw, child, node))
+			return true;
+	}
+	return false;
+}
+
+/**
+ * ice_sched_get_free_qparent - Get a free lan or rdma q group node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: branch number
+ * @owner: lan or rdma
+ *
+ * This function retrieves a free lan or rdma q group node
+ */
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner)
+{
+	struct ice_sched_node *vsi_node, *qgrp_node = NULL;
+	struct ice_vsi_ctx *vsi_ctx;
+	u16 max_children;
+	u8 qgrp_layer;
+
+	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	max_children = pi->hw->max_children[qgrp_layer];
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return NULL;
+	vsi_node = vsi_ctx->sched.vsi_node[tc];
+	/* bail out on an invalid VSI handle */
+	if (!vsi_node)
+		goto lan_q_exit;
+
+	/* get the first q group node from VSI sub-tree */
+	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	while (qgrp_node) {
+		/* make sure the qgroup node is part of the VSI subtree */
+		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+			if (qgrp_node->num_children < max_children &&
+			    qgrp_node->owner == owner)
+				break;
+		qgrp_node = qgrp_node->sibling;
+	}
+
+lan_q_exit:
+	return qgrp_node;
+}
+
+/**
+ * ice_sched_get_vsi_node - Get a VSI node based on VSI handle
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves a VSI node for a given VSI handle from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle)
+{
+	struct ice_sched_node *node;
+	u8 vsi_layer;
+
+	vsi_layer = ice_sched_get_vsi_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->vsi_handle == vsi_handle)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_get_agg_node - Get an aggregator node based on agg id
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to the TC node
+ * @agg_id: aggregator id
+ *
+ * This function retrieves an aggregator node for a given agg id from a given
+ * TC branch
+ */
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id)
+{
+	struct ice_sched_node *node;
+	u8 agg_layer;
+
+	agg_layer = ice_sched_get_agg_layer(hw);
+	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+
+	/* Check whether it already exists */
+	while (node) {
+		if (node->agg_id == agg_id)
+			return node;
+		node = node->sibling;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_check_node - Compare node parameters between SW DB and HW DB
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function queries and compares the HW element with SW DB node parameters
+ */
+static bool ice_sched_check_node(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	struct ice_aqc_get_elem buf;
+	enum ice_status status;
+	u32 node_teid;
+
+	node_teid = LE32_TO_CPU(node->info.node_teid);
+	status = ice_sched_query_elem(hw, node_teid, &buf);
+	if (status != ICE_SUCCESS)
+		return false;
+
+	if (memcmp(buf.generic, &node->info, sizeof(*buf.generic))) {
+		ice_debug(hw, ICE_DBG_SCHED, "Node mismatch for teid=0x%x\n",
+			  node_teid);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+ * @hw: pointer to the hw struct
+ * @num_qs: number of queues
+ * @num_nodes: num nodes array
+ *
+ * This function calculates the number of VSI child nodes based on the
+ * number of queues.
+ */
+static void
+ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
+{
+	u16 num = num_qs;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* calculate num nodes from q group to VSI layer */
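+	/* Hypothetical example: 100 queues, max_children = 8 at each layer:
+	 * the qgroup layer needs DIVIDE_AND_ROUND_UP(100, 8) = 13 nodes,
+	 * the layer above DIVIDE_AND_ROUND_UP(13, 8) = 2, and so on up to
+	 * (but excluding) the VSI layer.
+	 */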
+	for (i = qgl; i > vsil; i--) {
+		/* round to the next integer if there is a remainder */
+		num = DIVIDE_AND_ROUND_UP(num, hw->max_children[i]);
+
+		/* need at least one node */
+		num_nodes[i] = num ? num : 1;
+	}
+}
+
+/**
+ * ice_sched_add_vsi_child_nodes - add VSI child nodes to tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to the TC node
+ * @num_nodes: pointer to the num nodes that needs to be added per layer
+ * @owner: node owner (lan or rdma)
+ *
+ * This function adds the VSI child nodes to tree. It gets called for
+ * lan and rdma separately.
+ */
+static enum ice_status
+ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+			      struct ice_sched_node *tc_node, u16 *num_nodes,
+			      u8 owner)
+{
+	struct ice_sched_node *parent, *node;
+	struct ice_hw *hw = pi->hw;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, qgl, vsil;
+
+	qgl = ice_sched_get_qgrp_layer(hw);
+	vsil = ice_sched_get_vsi_layer(hw);
+	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	for (i = vsil + 1; i <= qgl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			node = parent;
+			while (node) {
+				node->owner = owner;
+				node = node->sibling;
+			}
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
+ * @hw: pointer to the hw struct
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function calculates the number of supported nodes needed to add this
+ * VSI into Tx tree including the VSI, parent and intermediate nodes in below
+ * layers
+ */
+static void
+ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+				 struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *node;
+	u8 vsil;
+	int i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+		/* Add intermediate nodes if the TC has no children and
+		 * at least one node is needed for the VSI
+		 */
+		if (!tc_node->num_children || i == vsil) {
+			num_nodes[i]++;
+		} else {
+			/* If the intermediate nodes have reached max
+			 * children then add a new one.
+			 */
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			/* scan all the siblings */
+			while (node) {
+				if (node->num_children < hw->max_children[i])
+					break;
+				node = node->sibling;
+			}
+
+			/* tree has one intermediate node to add this new VSI.
+			 * So no need to calculate supported nodes for below
+			 * layers.
+			 */
+			if (node)
+				break;
+			/* all the nodes are full, allocate a new one */
+			num_nodes[i]++;
+		}
+}
+
+/**
+ * ice_sched_add_vsi_support_nodes - add VSI supported nodes into Tx tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_node: pointer to TC node
+ * @num_nodes: pointer to num nodes array
+ *
+ * This function adds the VSI supported nodes into Tx tree including the
+ * VSI, its parent and intermediate nodes in below layers
+ */
+static enum ice_status
+ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				struct ice_sched_node *tc_node, u16 *num_nodes)
+{
+	struct ice_sched_node *parent = tc_node;
+	enum ice_status status;
+	u32 first_node_teid;
+	u16 num_added = 0;
+	u8 i, vsil;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
+						      i, num_nodes[i],
+						      &first_node_teid,
+						      &num_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (i == vsil)
+			parent->vsi_handle = vsi_handle;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_add_vsi_to_topo - add a new VSI into tree
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ *
+ * This function adds a new VSI into scheduler tree
+ */
+static enum ice_status
+ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+{
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *tc_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+
+	/* calculate number of supported nodes needed for this VSI */
+	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+
+	/* add vsi supported nodes to tc subtree */
+	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
+					       num_nodes);
+}
+
+/**
+ * ice_sched_update_vsi_child_nodes - update VSI child nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @new_numqs: new number of max queues
+ * @owner: owner of this subtree
+ *
+ * This function updates the VSI child nodes based on the number of queues
+ */
+static enum ice_status
+ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+				 u8 tc, u16 new_numqs, u8 owner)
+{
+	u16 new_num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u16 prev_numqs;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	if (owner == ICE_SCHED_NODE_OWNER_LAN)
+		prev_numqs = vsi_ctx->sched.max_lanq[tc];
+	else
+		return ICE_ERR_PARAM;
+
+	/* num queues are not changed or less than the previous number */
+	if (new_numqs <= prev_numqs)
+		return status;
+	if (new_numqs)
+		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+	/* Keep the max number of queue configuration all the time. Update the
+	 * tree only if number of queues > previous number of queues. This may
+	 * leave some extra nodes in the tree if number of queues < previous
+	 * number but that wouldn't harm anything. Removing those extra nodes
+	 * may complicate the code if those nodes are part of SRL or
+	 * individually rate limited.
+	 */
+	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+					       new_num_nodes, owner);
+	if (status)
+		return status;
+	vsi_ctx->sched.max_lanq[tc] = new_numqs;
+
+	return status;
+}
+
+/**
+ * ice_sched_cfg_vsi - configure the new/existing VSI
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
+ * @maxqs: max number of queues
+ * @owner: lan or rdma
+ * @enable: TC enabled or disabled
+ *
+ * This function adds/updates VSI nodes based on the number of queues. If TC is
+ * enabled and VSI is in suspended state then resume the VSI back. If TC is
+ * disabled then suspend the VSI if it is not already.
+ */
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "add/config VSI %d\n", vsi_handle);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+
+	/* suspend the VSI if tc is not enabled */
+	if (!enable) {
+		if (vsi_node && vsi_node->in_use) {
+			u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+			status = ice_sched_suspend_resume_elems(hw, 1, &teid,
+								true);
+			if (!status)
+				vsi_node->in_use = false;
+		}
+		return status;
+	}
+
+	/* TC is enabled, if it is a new VSI then add it to the tree */
+	if (!vsi_node) {
+		status = ice_sched_add_vsi_to_topo(pi, vsi_handle, tc);
+		if (status)
+			return status;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			return ICE_ERR_CFG;
+
+		vsi_ctx->sched.vsi_node[tc] = vsi_node;
+		vsi_node->in_use = true;
+		/* invalidate the max queues whenever VSI gets added first time
+		 * into the scheduler tree (boot or after reset). We need to
+		 * recreate the child nodes all the time in these cases.
+		 */
+		vsi_ctx->sched.max_lanq[tc] = 0;
+	}
+
+	/* update the VSI child nodes */
+	status = ice_sched_update_vsi_child_nodes(pi, vsi_handle, tc, maxqs,
+						  owner);
+	if (status)
+		return status;
+
+	/* TC is enabled, resume the VSI if it is in the suspend state */
+	if (!vsi_node->in_use) {
+		u32 teid = LE32_TO_CPU(vsi_node->info.node_teid);
+
+		status = ice_sched_suspend_resume_elems(hw, 1, &teid, false);
+		if (!status)
+			vsi_node->in_use = true;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_rm_agg_vsi_info - remove agg related VSI info entry
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes a single aggregator VSI info entry from the
+ * aggregator list.
+ */
+static void
+ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *atmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, atmp, &pi->hw->agg_list,
+				 ice_sched_agg_info,
+				 list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+		struct ice_sched_agg_vsi_info *vtmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, vtmp,
+					 &agg_info->agg_vsi_list,
+					 ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				LIST_DEL(&agg_vsi_info->list_entry);
+				ice_free(pi->hw, agg_vsi_info);
+				return;
+			}
+	}
+}
+
+/**
+ * ice_sched_is_leaf_node_present - check for a leaf node in the sub-tree
+ * @node: pointer to the sub-tree node
+ *
+ * This function checks for a leaf node presence in a given sub-tree node.
+ */
+static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+{
+	u8 i;
+
+	for (i = 0; i < node->num_children; i++)
+		if (ice_sched_is_leaf_node_present(node->children[i]))
+			return true;
+	/* check for a leaf node */
+	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+}
+
+/**
+ * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @owner: lan or rdma
+ *
+ * This function removes the VSI and its lan or rdma children nodes from the
+ * scheduler tree.
+ */
+static enum ice_status
+ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_vsi_ctx *vsi_ctx;
+	u8 i;
+
+	ice_debug(pi->hw, ICE_DBG_SCHED, "removing VSI %d\n", vsi_handle);
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		goto exit_sched_rm_vsi_cfg;
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		struct ice_sched_node *vsi_node, *tc_node;
+		u8 j = 0;
+
+		tc_node = ice_sched_get_tc_node(pi, i);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (ice_sched_is_leaf_node_present(vsi_node)) {
+			ice_debug(pi->hw, ICE_DBG_SCHED,
+				  "VSI has leaf nodes in TC %d\n", i);
+			status = ICE_ERR_IN_USE;
+			goto exit_sched_rm_vsi_cfg;
+		}
+		while (j < vsi_node->num_children) {
+			if (vsi_node->children[j]->owner == owner) {
+				ice_free_sched_node(pi, vsi_node->children[j]);
+
+				/* reset the counter again since the num
+				 * children will be updated after node removal
+				 */
+				j = 0;
+			} else {
+				j++;
+			}
+		}
+		/* remove the VSI if it has no children */
+		if (!vsi_node->num_children) {
+			ice_free_sched_node(pi, vsi_node);
+			vsi_ctx->sched.vsi_node[i] = NULL;
+
+			/* clean up agg related vsi info if any */
+			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+		}
+		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+			vsi_ctx->sched.max_lanq[i] = 0;
+	}
+	status = ICE_SUCCESS;
+
+exit_sched_rm_vsi_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_rm_vsi_lan_cfg - remove VSI and its lan children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its lan children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+/**
+ * ice_sched_is_tree_balanced - Check tree nodes are identical or not
+ * @hw: pointer to the hw struct
+ * @node: pointer to the ice_sched_node struct
+ *
+ * This function compares all the nodes for a given tree against HW DB nodes
+ * This function needs to be called with the port_info->sched_lock held
+ */
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
+{
+	u8 i;
+
+	/* start from the leaf node */
+	for (i = 0; i < node->num_children; i++)
+		/* Fail if the node doesn't match the HW DB;
+		 * the recursion is intentional and won't go
+		 * more than 9 calls deep (the tree depth)
+		 */
+		if (!ice_sched_is_tree_balanced(hw, node->children[i]))
+			return false;
+
+	return ice_sched_check_node(hw, node);
+}
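
Since this check must run with port_info->sched_lock held, a caller would
wrap it roughly as in the sketch below (hypothetical helper, not part of
this patch):

	static bool ice_port_tree_balanced(struct ice_port_info *pi)
	{
		bool balanced;

		/* compare the whole SW tree, from the root, against HW */
		ice_acquire_lock(&pi->sched_lock);
		balanced = ice_sched_is_tree_balanced(pi->hw, pi->root);
		ice_release_lock(&pi->sched_lock);
		return balanced;
	}
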
+
+/**
+ * ice_aq_query_node_to_root - retrieve the tree topology for a given node teid
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid
+ * @buf: pointer to buffer
+ * @buf_size: buffer size in bytes
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function retrieves the tree topology from the firmware for a given
+ * node teid to the root node.
+ */
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_query_node_to_root *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.query_node_to_root;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_node_to_root);
+	cmd->teid = CPU_TO_LE32(node_teid);
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_agg_info - get the aggregator info by agg id
+ * @hw: pointer to the hardware structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the agg id. It returns the aggregator info if the
+ * agg id is present in the list; otherwise it returns NULL.
+ */
+static struct ice_sched_agg_info*
+ice_get_agg_info(struct ice_hw *hw, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id)
+			return agg_info;
+
+	return NULL;
+}
+
+/**
+ * ice_move_all_vsi_to_dflt_agg - move all VSI(s) to default agg
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: traffic class number
+ * @rm_vsi_info: whether to also delete the agg VSI info entries
+ *
+ * This function moves all the VSI(s) to the default aggregator and deletes
+ * the agg VSI info, depending on the passed in boolean parameter
+ * rm_vsi_info. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi,
+			     struct ice_sched_agg_info *agg_info, u8 tc,
+			     bool rm_vsi_info)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_vsi_info *tmp;
+	enum ice_status status = ICE_SUCCESS;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_vsi_info, tmp, &agg_info->agg_vsi_list,
+				 ice_sched_agg_vsi_info, list_entry) {
+		u16 vsi_handle = agg_vsi_info->vsi_handle;
+
+		/* Move VSI to default agg */
+		if (!ice_is_tc_ena(agg_vsi_info->tc_bitmap[0], tc))
+			continue;
+
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle,
+						   ICE_DFLT_AGG_ID, tc);
+		if (status)
+			break;
+
+		ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+		if (rm_vsi_info && !agg_vsi_info->tc_bitmap[0]) {
+			LIST_DEL(&agg_vsi_info->list_entry);
+			ice_free(pi->hw, agg_vsi_info);
+		}
+	}
+
+	return status;
+}
+
+/**
+ * ice_rm_agg_cfg_tc - remove agg configuration for tc
+ * @pi: port information structure
+ * @agg_info: aggregator info
+ * @tc: tc number
+ * @rm_vsi_info: whether to also delete the agg VSI info entries
+ *
+ * This function removes the aggregator's references to the VSI(s) of the
+ * given tc and removes the agg configuration completely for the requested
+ * tc. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_rm_agg_cfg_tc(struct ice_port_info *pi, struct ice_sched_agg_info *agg_info,
+		  u8 tc, bool rm_vsi_info)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	/* If nothing to remove - return success */
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		goto exit_rm_agg_cfg_tc;
+
+	status = ice_move_all_vsi_to_dflt_agg(pi, agg_info, tc, rm_vsi_info);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	/* Delete aggregator node(s) */
+	status = ice_sched_rm_agg_cfg(pi, agg_info->agg_id, tc);
+	if (status)
+		goto exit_rm_agg_cfg_tc;
+
+	ice_clear_bit(tc, agg_info->tc_bitmap);
+exit_rm_agg_cfg_tc:
+	return status;
+}
+
+/**
+ * ice_save_agg_tc_bitmap - save agg TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * Save agg TC bitmap. This function needs to be called with scheduler
+ * lock held.
+ */
+static enum ice_status
+ice_save_agg_tc_bitmap(struct ice_port_info *pi, u32 agg_id,
+		       ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_cfg_agg - configure agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: bits TC bitmap
+ *
+ * It registers a unique aggregator node into scheduler services. It
+ * allows a user to register with a unique ID to track its resources.
+ * The aggregator type determines if this is a queue group, VSI group
+ * or aggregator group. It then creates the agg node(s) for requested
+ * tc(s) or removes an existing agg node including its configuration
+ * if indicated via tc_bitmap. Call ice_rm_agg_cfg to release agg
+ * resources and remove agg id.
+ * This function needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+		  enum ice_agg_type agg_type, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info) {
+		/* Create a new entry for the new agg id */
+		agg_info = (struct ice_sched_agg_info *)
+			ice_malloc(hw, sizeof(*agg_info));
+		if (!agg_info) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit_reg_agg;
+		}
+		agg_info->agg_id = agg_id;
+		agg_info->agg_type = agg_type;
+		agg_info->tc_bitmap[0] = 0;
+
+		/* Initialize the aggregator vsi list head */
+		INIT_LIST_HEAD(&agg_info->agg_vsi_list);
+
+		/* Add new entry in agg list */
+		LIST_ADD(&agg_info->list_entry, &hw->agg_list);
+	}
+	/* Create agg node(s) for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc)) {
+			/* Delete agg cfg tc if it exists previously */
+			status = ice_rm_agg_cfg_tc(pi, agg_info, tc, false);
+			if (status)
+				break;
+			continue;
+		}
+
+		/* Check if agg node for tc already exists */
+		if (ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+			continue;
+
+		/* Create new agg node for tc */
+		status = ice_sched_add_agg_cfg(pi, agg_id, tc);
+		if (status)
+			break;
+
+		/* Save agg node's tc information */
+		ice_set_bit(tc, agg_info->tc_bitmap);
+	}
+exit_reg_agg:
+	return status;
+}
+
+/**
+ * ice_cfg_agg - config agg node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @agg_type: aggregator type queue, VSI, or agg group
+ * @tc_bitmap: bits TC bitmap
+ *
+ * This function configures aggregator node(s).
+ */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id, enum ice_agg_type agg_type,
+	    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_cfg_agg(pi, agg_id, agg_type,
+				   (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_tc_bitmap(pi, agg_id,
+						(ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
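
As a usage sketch (the agg id and TC bitmap below are illustrative, not
taken from this patch), a caller registers an aggregator group on TC0/TC1
and later releases it with ice_rm_agg_cfg():

	u32 agg_id = 10;    /* unique, caller-chosen aggregator id */
	u8 tc_bitmap = 0x3; /* create agg nodes on TC0 and TC1 */
	enum ice_status status;

	status = ice_cfg_agg(pi, agg_id, ICE_AGG_TYPE_AGG, tc_bitmap);
	if (status)
		return status;

	/* ... attach VSIs, apply bw limits ... */

	/* tear down the agg nodes and drop the agg id when done */
	status = ice_rm_agg_cfg(pi, agg_id);
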
+
+/**
+ * ice_get_agg_vsi_info - get the aggregator VSI info
+ * @agg_info: aggregator info
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns agg VSI info based on VSI handle. This function needs
+ * to be called with scheduler lock held.
+ */
+static struct ice_sched_agg_vsi_info*
+ice_get_agg_vsi_info(struct ice_sched_agg_info *agg_info, u16 vsi_handle)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+	LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+			    ice_sched_agg_vsi_info, list_entry)
+		if (agg_vsi_info->vsi_handle == vsi_handle)
+			return agg_vsi_info;
+
+	return NULL;
+}
+
+/**
+ * ice_get_vsi_agg_info - get the agg info of VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ *
+ * The function returns the aggregator info of the VSI represented by
+ * vsi_handle; in this case the VSI has an aggregator other than the default
+ * one. This function needs to be called with the scheduler lock held.
+ */
+static struct ice_sched_agg_info*
+ice_get_vsi_agg_info(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+		if (agg_vsi_info)
+			return agg_info;
+	}
+	return NULL;
+}
+
+/**
+ * ice_save_agg_vsi_tc_bitmap - save aggregator VSI TC bitmap
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * Save the VSI to aggregator TC bitmap. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_save_agg_vsi_tc_bitmap(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+			   ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_ERR_PARAM;
+	ice_cp_bitmap(agg_vsi_info->replay_tc_bitmap, tc_bitmap,
+		      ICE_MAX_TRAFFIC_CLASS);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_assoc_vsi_to_agg - associate or move VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap of enabled tc(s)
+ *
+ * This function moves the VSI to a new or the default aggregator node. If
+ * the VSI is already associated with the agg node, no operation is
+ * performed on the tree. This function needs to be called with the
+ * scheduler lock held.
+ */
+static enum ice_status
+ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
+			   u16 vsi_handle, ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_agg_info(hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	/* check if the entry already exists */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info) {
+		/* Create new entry for vsi under agg list */
+		agg_vsi_info = (struct ice_sched_agg_vsi_info *)
+			ice_malloc(hw, sizeof(*agg_vsi_info));
+		if (!agg_vsi_info)
+			return ICE_ERR_PARAM;
+
+		/* add vsi id into the agg list */
+		agg_vsi_info->vsi_handle = vsi_handle;
+		LIST_ADD(&agg_vsi_info->list_entry, &agg_info->agg_vsi_list);
+	}
+	/* Move vsi node to new agg node for requested tc(s) */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+
+		/* Move VSI to new agg */
+		status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, tc);
+		if (status)
+			break;
+
+		if (agg_id != ICE_DFLT_AGG_ID)
+			ice_set_bit(tc, agg_vsi_info->tc_bitmap);
+		else
+			ice_clear_bit(tc, agg_vsi_info->tc_bitmap);
+	}
+	/* If the VSI moved back to the default agg, delete agg_vsi_info. */
+	if (!ice_is_any_bit_set(agg_vsi_info->tc_bitmap,
+				ICE_MAX_TRAFFIC_CLASS)) {
+		LIST_DEL(&agg_vsi_info->list_entry);
+		ice_free(hw, agg_vsi_info);
+	}
+	return status;
+}
+
+/**
+ * ice_move_vsi_to_agg - moves VSI to new or default agg
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: tc bitmap of enabled tc(s)
+ *
+ * Move or associate VSI to a new or default aggregator node.
+ */
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap)
+{
+	ice_bitmap_t bitmap = tc_bitmap;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_id, vsi_handle,
+					    (ice_bitmap_t *)&bitmap);
+	if (!status)
+		status = ice_save_agg_vsi_tc_bitmap(pi, agg_id, vsi_handle,
+						    (ice_bitmap_t *)&bitmap);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
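
Attaching a VSI to that aggregator, and detaching it again later, then
looks like this (sketch; the agg id and bitmap are illustrative):

	/* move the VSI under agg 10 on TC0 and TC1 */
	status = ice_move_vsi_to_agg(pi, 10, vsi_handle, 0x3);
	if (status)
		return status;

	/* moving to ICE_DFLT_AGG_ID returns it to the default agg */
	status = ice_move_vsi_to_agg(pi, ICE_DFLT_AGG_ID, vsi_handle, 0x3);
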
+
+/**
+ * ice_rm_agg_cfg - remove agg configuration
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the aggregator's references to VSI(s) and deletes
+ * the agg id info. It removes the agg configuration completely.
+ */
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id)
+{
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		status = ice_rm_agg_cfg_tc(pi, agg_info, tc, true);
+		if (status)
+			goto exit_ice_rm_agg_cfg;
+	}
+
+	if (ice_is_any_bit_set(agg_info->tc_bitmap, ICE_MAX_TRAFFIC_CLASS)) {
+		status = ICE_ERR_IN_USE;
+		goto exit_ice_rm_agg_cfg;
+	}
+
+	/* Safe to delete entry now */
+	LIST_DEL(&agg_info->list_entry);
+	ice_free(pi->hw, agg_info);
+
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+
+exit_ice_rm_agg_cfg:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_set_clear_cir_bw_alloc - set or clear CIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear CIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->cir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->cir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_CIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_set_clear_eir_bw_alloc - set or clear EIR bw alloc information
+ * @bw_t_info: bandwidth type information structure
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save or clear EIR bw alloc information (bw_alloc) in the passed param
+ * bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw_alloc(struct ice_bw_type_info *bw_t_info, u16 bw_alloc)
+{
+	bw_t_info->eir_bw.bw_alloc = bw_alloc;
+	if (bw_t_info->eir_bw.bw_alloc)
+		ice_set_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_EIR_WT, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_bw_alloc - save VSI node's bw alloc information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save bw alloc information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&vsi_ctx->sched.bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_cir_bw - set or clear CIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear CIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_cir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = 0;
+	} else {
+		/* Save type of bw information */
+		ice_set_bit(ICE_BW_TYPE_CIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->cir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_eir_bw - set or clear EIR bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear EIR bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved shared bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+		/* save EIR bw information */
+		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = bw;
+	}
+}
+
+/**
+ * ice_set_clear_shared_bw - set or clear shared bw
+ * @bw_t_info: bandwidth type information structure
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save or clear shared bandwidth (bw) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
+{
+	if (bw == ICE_SCHED_DFLT_BW) {
+		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = 0;
+	} else {
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element.
+		 * First clear earlier saved EIR bw information.
+		 */
+		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
+		bw_t_info->eir_bw.bw = 0;
+		/* save shared bw information */
+		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
+		bw_t_info->shared_bw = bw;
+	}
+}
+
+/**
+ * ice_sched_save_vsi_bw - save VSI node's bw information
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_bw(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&vsi_ctx->sched.bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_set_clear_prio - set or clear priority information
+ * @bw_t_info: bandwidth type information structure
+ * @prio: priority to save
+ *
+ * Save or clear priority (prio) in the passed param bw_t_info.
+ */
+static void
+ice_set_clear_prio(struct ice_bw_type_info *bw_t_info, u8 prio)
+{
+	bw_t_info->generic = prio;
+	if (bw_t_info->generic)
+		ice_set_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+	else
+		ice_clear_bit(ICE_BW_TYPE_PRIO, bw_t_info->bw_t_bitmap);
+}
+
+/**
+ * ice_sched_save_vsi_prio - save VSI node's priority information
+ * @pi: port information structure
+ * @vsi_handle: Software VSI handle
+ * @tc: traffic class
+ * @prio: priority to save
+ *
+ * Save priority information of VSI type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_vsi_prio(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			u8 prio)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	ice_set_clear_prio(&vsi_ctx->sched.bw_t_info[tc], prio);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw_alloc - save agg node's bw alloc information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: bandwidth alloc information
+ *
+ * Save bw alloc information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			    enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&agg_info->bw_t_info[tc], bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_save_agg_bw - save agg node's bw information
+ * @pi: port information structure
+ * @agg_id: node aggregator id
+ * @tc: traffic class
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save bw information of AGG type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_agg_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
+		      enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+
+	agg_info = ice_get_agg_info(pi->hw, agg_id);
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	if (!ice_is_tc_ena(agg_info->tc_bitmap[0], tc))
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&agg_info->bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_vsi_bw_lmt_per_tc - configure VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
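
For example, capping TC0 of a VSI at 1 Gbps and later reverting to the
default profile would be (sketch; values illustrative):

	/* bw is given in kbps, so 1 Gbps == 1000000 */
	status = ice_cfg_vsi_bw_lmt_per_tc(pi, vsi_handle, 0, ICE_MAX_BW,
					   1000000);
	if (status)
		return status;

	/* remove the cap by restoring the default (max bw) profile */
	status = ice_cfg_vsi_bw_dflt_lmt_per_tc(pi, vsi_handle, 0,
						ICE_MAX_BW);
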
+
+/**
+ * ice_cfg_vsi_bw_dflt_lmt_per_tc - configure default VSI bw limit per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function configures default bw limit of VSI scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+						  ICE_AGG_TYPE_VSI,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_lmt_per_tc - configure aggregator bw limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function applies bw limit to aggregator scheduling node based on tc
+ * information.
+ */
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type, bw);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_agg_bw_dflt_lmt_per_tc - configure aggregator bw default limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: traffic class
+ * @rl_type: min or max
+ *
+ * This function applies default bw limit to aggregator scheduling node based
+ * on tc information.
+ */
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type)
+{
+	enum ice_status status;
+
+	status = ice_sched_set_node_bw_lmt_per_tc(pi, agg_id, ICE_AGG_TYPE_AGG,
+						  tc, rl_type,
+						  ICE_SCHED_DFLT_BW);
+	if (!status) {
+		ice_acquire_lock(&pi->sched_lock);
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type,
+					       ICE_SCHED_DFLT_BW);
+		ice_release_lock(&pi->sched_lock);
+	}
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_shared_lmt - configure VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, bw);
+}
+
+/**
+ * ice_cfg_vsi_bw_no_shared_lmt - configure VSI bw for no shared limiter
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function removes the shared rate limiter (SRL) of all VSI type nodes
+ * across all traffic classes for the VSI matching the handle.
+ */
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
+					       ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_agg_bw_shared_lmt - configure aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * nodes across all traffic classes for the aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, bw);
+}
+
+/**
+ * ice_cfg_agg_bw_no_shared_lmt - configure aggregator bw for no shared limiter
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function removes the shared rate limiter (SRL) of all agg type nodes
+ * across all traffic classes for the aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
+{
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_vsi_q_priority - config VSI queue priority of nodes
+ * @pi: port information structure
+ * @num_qs: number of VSI queues
+ * @q_ids: queue ids array
+ * @q_prio: queue priority array
+ *
+ * This function configures the queue node priority (Sibling Priority) of the
+ * passed in VSI's queue(s).
+ */
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_qs; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_ids[i]);
+		if (!node || node->info.data.elem_type !=
+		    ICE_AQC_ELEM_TYPE_LEAF) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		/* Configure Priority */
+		status = ice_sched_cfg_sibl_node_prio(hw, node, q_prio[i]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
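
A caller passes parallel arrays of leaf node teids and priorities, e.g.
(sketch; the teids would come from queue setup and are illustrative):

	u32 q_ids[2] = { q0_teid, q1_teid }; /* leaf (queue) node teids */
	u8 q_prio[2] = { 0, 1 };             /* sibling priorities */

	status = ice_cfg_vsi_q_priority(pi, 2, q_ids, q_prio);
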
+
+/**
+ * ice_cfg_agg_vsi_priority_per_tc - config agg's VSI priority per tc
+ * @pi: port information structure
+ * @agg_id: Aggregator id
+ * @num_vsis: number of VSI(s)
+ * @vsi_handle_arr: array of software VSI handles
+ * @node_prio: pointer to node priority
+ * @tc: traffic class
+ *
+ * This function configures the node priority (Sibling Priority) of the
+ * passed in VSI's for a given traffic class (tc) of an Aggregator id.
+ */
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc)
+{
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	struct ice_hw *hw = pi->hw;
+	u16 i;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		goto exit_agg_priority_per_tc;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_agg_priority_per_tc;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		goto exit_agg_priority_per_tc;
+
+	if (num_vsis > hw->max_children[agg_node->tx_sched_layer])
+		goto exit_agg_priority_per_tc;
+
+	for (i = 0; i < num_vsis; i++) {
+		struct ice_sched_node *vsi_node;
+		bool vsi_handle_valid = false;
+		u16 vsi_handle;
+
+		status = ICE_ERR_PARAM;
+		vsi_handle = vsi_handle_arr[i];
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			goto exit_agg_priority_per_tc;
+		/* Verify child nodes before applying settings */
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				vsi_handle_valid = true;
+				break;
+			}
+		if (!vsi_handle_valid)
+			goto exit_agg_priority_per_tc;
+
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			goto exit_agg_priority_per_tc;
+
+		if (ice_sched_find_node_in_subtree(hw, agg_node, vsi_node)) {
+			/* Configure Priority */
+			status = ice_sched_cfg_sibl_node_prio(hw, vsi_node,
+							      node_prio[i]);
+			if (status)
+				break;
+			status = ice_sched_save_vsi_prio(pi, vsi_handle, tc,
+							 node_prio[i]);
+			if (status)
+				break;
+		}
+	}
+
+exit_agg_priority_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_bw_alloc - config VSI bw alloc per tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @ena_tcmap: enabled tc map
+ * @rl_type: Rate limit type CIR/EIR
+ * @bw_alloc: Array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in VSI's
+ * node(s) for the enabled traffic class(es).
+ */
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(pi->hw, vsi_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw_alloc(pi, vsi_handle, tc,
+						     rl_type, bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
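
Relative weighting, rather than a hard cap, goes through this bw alloc
path; e.g. splitting the committed rate 60/40 between TC0 and TC1 (sketch;
the percentages are illustrative):

	u8 bw_alloc[ICE_MAX_TRAFFIC_CLASS] = { 60, 40 };

	/* ena_tcmap 0x3 applies bw_alloc[0]/bw_alloc[1] to TC0/TC1 */
	status = ice_cfg_vsi_bw_alloc(pi, vsi_handle, 0x3, ICE_MIN_BW,
				      bw_alloc);
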
+
+/**
+ * ice_cfg_agg_bw_alloc - config agg bw alloc
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @ena_tcmap: enabled tc map
+ * @rl_type: rate limit type CIR/EIR
+ * @bw_alloc: array of bw alloc
+ *
+ * This function configures the bw allocation of the passed in aggregator
+ * for the enabled traffic class(es).
+ */
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc)
+{
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_cfg_agg_bw_alloc;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+
+		if (!ice_is_tc_ena(ena_tcmap, tc))
+			continue;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		status = ice_sched_cfg_node_bw_alloc(hw, agg_node, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw_alloc(pi, agg_id, tc, rl_type,
+						     bw_alloc[tc]);
+		if (status)
+			break;
+	}
+
+exit_cfg_agg_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_calc_wakeup - calculate rl profile wakeup parameter
+ * @bw: bandwidth in kbps
+ *
+ * This function calculates the wakeup parameter of rl profile.
+ */
+static u16 ice_sched_calc_wakeup(s32 bw)
+{
+	s64 bytes_per_sec, wakeup_int, wakeup_a, wakeup_b, wakeup_f;
+	s32 wakeup_f_int;
+	u16 wakeup = 0;
+
+	/* Get the wakeup integer value */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+	wakeup_int = DIV_64BIT(ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+	if (wakeup_int > 63) {
+		wakeup = (u16)((1 << 15) | wakeup_int);
+	} else {
+		/* Calculate fraction value up to 4 decimals
+		 * Convert Integer value to a constant multiplier
+		 */
+		wakeup_b = (s64)ICE_RL_PROF_MULTIPLIER * wakeup_int;
+		wakeup_a = DIV_64BIT((s64)ICE_RL_PROF_MULTIPLIER *
+				     ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+
+		/* Get Fraction value */
+		wakeup_f = wakeup_a - wakeup_b;
+
+		/* Round up the Fractional value via Ceil(Fractional value) */
+		if (wakeup_f > DIV_64BIT(ICE_RL_PROF_MULTIPLIER, 2))
+			wakeup_f += 1;
+
+		wakeup_f_int = (s32)DIV_64BIT(wakeup_f * ICE_RL_PROF_FRACTION,
+					      ICE_RL_PROF_MULTIPLIER);
+		wakeup |= (u16)(wakeup_int << 9);
+		wakeup |= (u16)(0x1ff & wakeup_f_int);
+	}
+
+	return wakeup;
+}
+
+/**
+ * ice_sched_bw_to_rl_profile - convert bw to profile parameters
+ * @bw: bandwidth in kbps
+ * @profile: profile parameters to return
+ *
+ * This function converts the bw to profile structure format.
+ */
+static enum ice_status
+ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	s64 bytes_per_sec, ts_rate, mv_tmp;
+	bool found = false;
+	s32 encode = 0;
+	s64 mv = 0;
+	s32 i;
+
+	/* Bw settings range is from 0.5Mb/sec to 100Gb/sec */
+	if (bw < ICE_SCHED_MIN_BW || bw > ICE_SCHED_MAX_BW)
+		return status;
+
+	/* Bytes per second from kbps */
+	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
+
+	/* encode is 6 bits, but only 5 bits are really useful */
+	for (i = 0; i < 64; i++) {
+		u64 pow_result = BIT_ULL(i);
+
+		ts_rate = DIV_64BIT((s64)ICE_RL_PROF_FREQUENCY,
+				    pow_result * ICE_RL_PROF_TS_MULTIPLIER);
+		if (ts_rate <= 0)
+			continue;
+
+		/* Multiplier value */
+		mv_tmp = DIV_64BIT(bytes_per_sec * ICE_RL_PROF_MULTIPLIER,
+				   ts_rate);
+
+		/* Round to the nearest ICE_RL_PROF_MULTIPLIER */
+		mv = round_up_64bit(mv_tmp, ICE_RL_PROF_MULTIPLIER);
+
+		/* First multiplier value greater than the given
+		 * accuracy bytes
+		 */
+		if (mv > ICE_RL_PROF_ACCURACY_BYTES) {
+			encode = i;
+			found = true;
+			break;
+		}
+	}
+	if (found) {
+		u16 wm;
+
+		wm = ice_sched_calc_wakeup(bw);
+		profile->rl_multiply = CPU_TO_LE16(mv);
+		profile->wake_up_calc = CPU_TO_LE16(wm);
+		profile->rl_encode = CPU_TO_LE16(encode);
+		status = ICE_SUCCESS;
+	} else {
+		status = ICE_ERR_DOES_NOT_EXIST;
+	}
+
+	return status;
+}
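
Together with ice_sched_calc_wakeup() above, this yields the three profile
fields consumed by firmware. Since it is a static helper, the sketch below
is for illustration only (the 1 Gbps input is an arbitrary example):

	struct ice_aqc_rl_profile_elem profile = { 0 };
	u32 bw = 1000000; /* 1 Gbps in kbps */

	if (!ice_sched_bw_to_rl_profile(bw, &profile))
		ice_debug(hw, ICE_DBG_SCHED,
			  "encode %u multiply %u wakeup 0x%x\n",
			  LE16_TO_CPU(profile.rl_encode),
			  LE16_TO_CPU(profile.rl_multiply),
			  LE16_TO_CPU(profile.wake_up_calc));
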
+
+/**
+ * ice_sched_add_rl_profile - add rl profile
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: specifies in which layer to create profile
+ *
+ * This function first checks the existing list for a corresponding bw
+ * parameter. If it exists, it returns the associated profile, otherwise
+ * it creates a new rate limit profile for the requested bw, and adds it to
+ * the hw db and local list. It returns the new profile or NULL on error.
+ * The caller needs to hold the scheduler lock.
+ */
+static struct ice_aqc_rl_profile_info *
+ice_sched_add_rl_profile(struct ice_port_info *pi,
+			 enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	u16 profiles_added = 0, num_profiles = 1;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw;
+	u8 profile_type;
+
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		break;
+	default:
+		return NULL;
+	}
+
+	if (!pi)
+		return NULL;
+	hw = pi->hw;
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    rl_prof_elem->bw == bw)
+			/* Return existing profile id info */
+			return rl_prof_elem;
+
+	/* Create new profile id */
+	rl_prof_elem = (struct ice_aqc_rl_profile_info *)
+		ice_malloc(hw, sizeof(*rl_prof_elem));
+
+	if (!rl_prof_elem)
+		return NULL;
+
+	status = ice_sched_bw_to_rl_profile(bw, &rl_prof_elem->profile);
+	if (status != ICE_SUCCESS)
+		goto exit_add_rl_prof;
+
+	rl_prof_elem->bw = bw;
+	/* layer_num is zero relative, and fw expects level from 1 to 9 */
+	rl_prof_elem->profile.level = layer_num + 1;
+	rl_prof_elem->profile.flags = profile_type;
+	rl_prof_elem->profile.max_burst_size = CPU_TO_LE16(hw->max_burst_size);
+
+	/* Create new entry in hw db */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_prof_elem->profile;
+	status = ice_aq_add_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+				       &profiles_added, NULL);
+	if (status || profiles_added != num_profiles)
+		goto exit_add_rl_prof;
+
+	/* Good entry - add it to the list */
+	rl_prof_elem->prof_id_ref = 0;
+	LIST_ADD(&rl_prof_elem->list_entry, &pi->rl_prof_list[layer_num]);
+	return rl_prof_elem;
+
+exit_add_rl_prof:
+	ice_free(hw, rl_prof_elem);
+	return NULL;
+}
+
+/**
+ * ice_sched_del_rl_profile - remove rl profile
+ * @hw: pointer to the hw struct
+ * @rl_info: rate limit profile information
+ *
+ * If the profile id is not referenced anymore, it removes the profile id
+ * with its associated parameters from the hw db and locally. The caller
+ * needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info)
+{
+	struct ice_aqc_rl_profile_generic_elem *buf;
+	u16 num_profiles_removed;
+	enum ice_status status;
+	u16 num_profiles = 1;
+
+	if (rl_info->prof_id_ref != 0)
+		return ICE_ERR_IN_USE;
+
+	/* Safe to remove profile id */
+	buf = (struct ice_aqc_rl_profile_generic_elem *)
+		&rl_info->profile;
+	status = ice_aq_remove_rl_profile(hw, num_profiles, buf, sizeof(*buf),
+					  &num_profiles_removed, NULL);
+	if (status || num_profiles_removed != num_profiles)
+		return ICE_ERR_CFG;
+
+	/* Delete stale entry now */
+	LIST_DEL(&rl_info->list_entry);
+	ice_free(hw, rl_info);
+	return status;
+}
+
+/**
+ * ice_sched_rm_unused_rl_prof - remove unused rl profile
+ * @pi: port information structure
+ *
+ * This function removes unused rate limit profiles from the hw and
+ * SW DB. The caller needs to hold scheduler lock.
+ */
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi)
+{
+	u8 ln;
+
+	for (ln = 0; ln < pi->hw->num_tx_sched_layers; ln++) {
+		struct ice_aqc_rl_profile_info *rl_prof_elem;
+		struct ice_aqc_rl_profile_info *rl_prof_tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(rl_prof_elem, rl_prof_tmp,
+					 &pi->rl_prof_list[ln],
+					 ice_aqc_rl_profile_info, list_entry) {
+			if (!ice_sched_del_rl_profile(pi->hw, rl_prof_elem))
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Removed rl profile\n");
+		}
+	}
+}
+
+/**
+ * ice_sched_update_elem - update element
+ * @hw: pointer to the hw struct
+ * @node: pointer to node
+ * @info: node info to update
+ *
+ * It updates the HW DB and the local SW DB of the node. It updates the
+ * scheduling parameters of the node from the argument info data buffer
+ * (info->data buf) and returns success or error on a config sched element
+ * failure. The caller needs to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
+		      struct ice_aqc_txsched_elem_data *info)
+{
+	struct ice_aqc_conf_elem buf;
+	enum ice_status status;
+	u16 elem_cfgd = 0;
+	u16 num_elems = 1;
+
+	buf.generic[0] = *info;
+	/* Parent teid is reserved field in this aq call */
+	buf.generic[0].parent_teid = 0;
+	/* Element type is reserved field in this aq call */
+	buf.generic[0].data.elem_type = 0;
+	/* Flags is reserved field in this aq call */
+	buf.generic[0].data.flags = 0;
+
+	/* Update HW DB */
+	/* Configure element node */
+	status = ice_aq_cfg_sched_elems(hw, num_elems, &buf, sizeof(buf),
+					&elem_cfgd, NULL);
+	if (status || elem_cfgd != num_elems) {
+		ice_debug(hw, ICE_DBG_SCHED, "Config sched elem error\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* Config success case */
+	/* Now update local SW DB */
+	/* Only copy the data portion of info buffer */
+	node->info.data = info->data;
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_lmt - configure node sched params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @rl_prof_id: rate limit profile id
+ *
+ * This function configures node element's bw limit.
+ */
+static enum ice_status
+ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u16 rl_prof_id)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+
+	buf = node->info;
+	data = &buf.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_MAX_BW:
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			return ICE_ERR_CFG;
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
+		break;
+	case ICE_SHARED_BW:
+		/* Check for removing shared bw */
+		if (rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID) {
+			/* remove shared profile */
+			data->valid_sections &= ~ICE_AQC_ELEM_VALID_SHARED;
+			data->srl_id = 0; /* clear srl field */
+
+			/* enable back EIR to default profile */
+			data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+			data->eir_bw.bw_profile_idx =
+				CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
+			break;
+		}
+		/* EIR bw and Shared bw profiles are mutually exclusive and
+		 * hence only one of them may be set for any given element
+		 */
+		if ((data->valid_sections & ICE_AQC_ELEM_VALID_EIR) &&
+		    (LE16_TO_CPU(data->eir_bw.bw_profile_idx) !=
+			    ICE_SCHED_DFLT_RL_PROF_ID))
+			return ICE_ERR_CFG;
+		/* EIR bw is set to default, disable it */
+		data->valid_sections &= ~ICE_AQC_ELEM_VALID_EIR;
+		/* Okay to enable shared bw now */
+		data->valid_sections |= ICE_AQC_ELEM_VALID_SHARED;
+		data->srl_id = CPU_TO_LE16(rl_prof_id);
+		break;
+	default:
+		/* Unknown rate limit type */
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	return ice_sched_update_elem(hw, node, &buf);
+}
+
+/**
+ * ice_sched_get_node_rl_prof_id - get node's rate limit profile id
+ * @node: sched node
+ * @rl_type: rate limit type
+ *
+ * If existing profile matches, it returns the corresponding rate
+ * limit profile id, otherwise it returns an invalid id as error.
+ */
+static u16
+ice_sched_get_node_rl_prof_id(struct ice_sched_node *node,
+			      enum ice_rl_type rl_type)
+{
+	u16 rl_prof_id = ICE_SCHED_INVAL_PROF_ID;
+	struct ice_aqc_txsched_elem *data;
+
+	data = &node->info.data;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_CIR)
+			rl_prof_id = LE16_TO_CPU(data->cir_bw.bw_profile_idx);
+		break;
+	case ICE_MAX_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_EIR)
+			rl_prof_id = LE16_TO_CPU(data->eir_bw.bw_profile_idx);
+		break;
+	case ICE_SHARED_BW:
+		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
+			rl_prof_id = LE16_TO_CPU(data->srl_id);
+		break;
+	default:
+		break;
+	}
+
+	return rl_prof_id;
+}
+
+/**
+ * ice_sched_get_rl_prof_layer - selects rate limit profile creation layer
+ * @pi: port information structure
+ * @rl_type: type of rate limit bw - min, max, or shared
+ * @layer_index: layer index
+ *
+ * This function returns requested profile creation layer.
+ */
+static u8
+ice_sched_get_rl_prof_layer(struct ice_port_info *pi, enum ice_rl_type rl_type,
+			    u8 layer_index)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (layer_index >= hw->num_tx_sched_layers)
+		return ICE_SCHED_INVAL_LAYER_NUM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		if (hw->layer_info[layer_index].max_cir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_MAX_BW:
+		if (hw->layer_info[layer_index].max_eir_rl_profiles)
+			return layer_index;
+		break;
+	case ICE_SHARED_BW:
+		/* if current layer doesn't support SRL profile creation
+		 * then try a layer up or down.
+		 */
+		if (hw->layer_info[layer_index].max_srl_profiles)
+			return layer_index;
+		else if (layer_index < hw->num_tx_sched_layers - 1 &&
+			 hw->layer_info[layer_index + 1].max_srl_profiles)
+			return layer_index + 1;
+		else if (layer_index > 0 &&
+			 hw->layer_info[layer_index - 1].max_srl_profiles)
+			return layer_index - 1;
+		break;
+	default:
+		break;
+	}
+	return ICE_SCHED_INVAL_LAYER_NUM;
+}
+
+/**
+ * ice_sched_get_srl_node - get shared rate limit node
+ * @node: tree node
+ * @srl_layer: shared rate limit layer
+ *
+ * This function returns SRL node to be used for shared rate limit purpose.
+ * The caller needs to hold scheduler lock.
+ */
+static struct ice_sched_node *
+ice_sched_get_srl_node(struct ice_sched_node *node, u8 srl_layer)
+{
+	if (srl_layer > node->tx_sched_layer)
+		return node->children[0];
+	else if (srl_layer < node->tx_sched_layer)
+		/* A node can't be created without a parent. It will always
+		 * have a valid parent except the root node.
+		 */
+		return node->parent;
+	else
+		return node;
+}
+
+/**
+ * ice_sched_rm_rl_profile - remove rl profile id
+ * @pi: port information structure
+ * @layer_num: layer number where profiles are saved
+ * @profile_type: profile type like EIR, CIR, or SRL
+ * @profile_id: profile id to remove
+ *
+ * This function removes rate limit profile from layer 'layer_num' of type
+ * 'profile_type' and profile id as 'profile_id'. The caller needs to hold
+ * scheduler lock.
+ */
+static enum ice_status
+ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
+			u16 profile_id)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_elem;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* Check the existing list for rl profile */
+	LIST_FOR_EACH_ENTRY(rl_prof_elem, &pi->rl_prof_list[layer_num],
+			    ice_aqc_rl_profile_info, list_entry)
+		if (rl_prof_elem->profile.flags == profile_type &&
+		    LE16_TO_CPU(rl_prof_elem->profile.profile_id) ==
+		    profile_id) {
+			if (rl_prof_elem->prof_id_ref)
+				rl_prof_elem->prof_id_ref--;
+
+			/* Remove old profile id from database */
+			status = ice_sched_del_rl_profile(pi->hw, rl_prof_elem);
+			if (status && status != ICE_ERR_IN_USE)
+				ice_debug(pi->hw, ICE_DBG_SCHED,
+					  "Remove rl profile failed\n");
+			break;
+		}
+	if (status == ICE_ERR_IN_USE)
+		status = ICE_SUCCESS;
+	return status;
+}
+
+/**
+ * ice_sched_set_node_bw_dflt - set node's bandwidth limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ * @layer_num: layer number where rl profiles are saved
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   enum ice_rl_type rl_type, u8 layer_num)
+{
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 profile_type;
+	u16 rl_prof_id;
+	u16 old_id;
+
+	hw = pi->hw;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_MAX_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_EIR;
+		rl_prof_id = ICE_SCHED_DFLT_RL_PROF_ID;
+		break;
+	case ICE_SHARED_BW:
+		profile_type = ICE_AQC_RL_PROFILE_TYPE_SRL;
+		/* No SRL is configured for default case */
+		rl_prof_id = ICE_SCHED_NO_SHARED_RL_PROF_ID;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* Remove stale rl profile id */
+	if (old_id == ICE_SCHED_DFLT_RL_PROF_ID ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID)
+		return status;
+	return ice_sched_rm_rl_profile(pi, layer_num, profile_type, old_id);
+}
+
+/**
+ * ice_sched_set_eir_srl_excl - set EIR/SRL exclusiveness
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @layer_num: layer number where rate limit profiles are saved
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth value
+ *
+ * This function prepares the node element's bandwidth for SRL or EIR use.
+ * EIR bw and Shared bw profiles are mutually exclusive and hence only one of
+ * them may be set for any given element. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
+			   struct ice_sched_node *node,
+			   u8 layer_num, enum ice_rl_type rl_type, u32 bw)
+{
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node passed in this case, it may be different node */
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* SRL being removed, ice_sched_cfg_node_bw_lmt()
+			 * enables EIR to default. EIR is not set in this
+			 * case, so no additional action is required.
+			 */
+			return ICE_SUCCESS;
+
+		/* SRL being configured, set EIR to default here.
+		 * ice_sched_cfg_node_bw_lmt() disables EIR when it
+		 * configures SRL
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node, ICE_MAX_BW,
+						  layer_num);
+	} else if (rl_type == ICE_MAX_BW &&
+		   node->info.data.valid_sections & ICE_AQC_ELEM_VALID_SHARED) {
+		/* Remove Shared profile. Set default shared bw call
+		 * removes shared profile for a node.
+		 */
+		return ice_sched_set_node_bw_dflt(pi, node,
+						  ICE_SHARED_BW,
+						  layer_num);
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_node_bw - set node's bandwidth
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ * @layer_num: layer number
+ *
+ * This function adds a new profile corresponding to the requested bw,
+ * configures the node's rl profile id of type cir, eir, or srl, and removes
+ * the old profile id from the local database. The caller needs to hold the
+ * scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
+		      enum ice_rl_type rl_type, u32 bw, u8 layer_num)
+{
+	struct ice_aqc_rl_profile_info *rl_prof_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_hw *hw = pi->hw;
+	u16 old_id, rl_prof_id;
+
+	rl_prof_info = ice_sched_add_rl_profile(pi, rl_type, bw, layer_num);
+	if (!rl_prof_info)
+		return status;
+
+	rl_prof_id = LE16_TO_CPU(rl_prof_info->profile.profile_id);
+
+	/* Save existing rl prof id for later clean up */
+	old_id = ice_sched_get_node_rl_prof_id(node, rl_type);
+	/* Configure bw scheduling parameters */
+	status = ice_sched_cfg_node_bw_lmt(hw, node, rl_type, rl_prof_id);
+	if (status)
+		return status;
+
+	/* New changes have been applied */
+	/* Increment the profile id reference count */
+	rl_prof_info->prof_id_ref++;
+
+	/* Check for old id removal */
+	if ((old_id == ICE_SCHED_DFLT_RL_PROF_ID && rl_type != ICE_SHARED_BW) ||
+	    old_id == ICE_SCHED_INVAL_PROF_ID || old_id == rl_prof_id)
+		return status;
+
+	return ice_sched_rm_rl_profile(pi, layer_num,
+				       rl_prof_info->profile.flags,
+				       old_id);
+}
+
+/**
+ * ice_sched_set_node_bw_lmt - set node's bw limit
+ * @pi: port information structure
+ * @node: tree node
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * It updates the node's bw limit parameters, such as the bw rl profile id
+ * of type cir, eir, or srl. The caller needs to hold the scheduler lock.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_sched_node *cfg_node = node;
+	enum ice_status status;
+	struct ice_hw *hw;
+	u8 layer_num;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	/* Remove unused rl profile ids from HW and SW DB */
+	ice_sched_rm_unused_rl_prof(pi);
+	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+						node->tx_sched_layer);
+	if (layer_num >= hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	if (rl_type == ICE_SHARED_BW) {
+		/* SRL node may be different */
+		cfg_node = ice_sched_get_srl_node(node, layer_num);
+		if (!cfg_node)
+			return ICE_ERR_CFG;
+	}
+	/* EIR bw and Shared bw profiles are mutually exclusive and
+	 * hence only one of them may be set for any given element
+	 */
+	status = ice_sched_set_eir_srl_excl(pi, cfg_node, layer_num, rl_type,
+					    bw);
+	if (status)
+		return status;
+	if (bw == ICE_SCHED_DFLT_BW)
+		return ice_sched_set_node_bw_dflt(pi, cfg_node, rl_type,
+						  layer_num);
+	return ice_sched_set_node_bw(pi, cfg_node, rl_type, bw, layer_num);
+}
+
+/**
+ * ice_sched_set_node_bw_dflt_lmt - set node's bw limit to default
+ * @pi: port information structure
+ * @node: pointer to node structure
+ * @rl_type: rate limit type min, max, or shared
+ *
+ * This function configures node element's bw rate limit profile id of
+ * type cir, eir, or srl to default. This function needs to be called
+ * with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_set_node_bw_dflt_lmt(struct ice_port_info *pi,
+			       struct ice_sched_node *node,
+			       enum ice_rl_type rl_type)
+{
+	return ice_sched_set_node_bw_lmt(pi, node, rl_type,
+					 ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_validate_srl_node - Check node for SRL applicability
+ * @node: sched node to configure
+ * @sel_layer: selected SRL layer
+ *
+ * This function checks if the SRL can be applied to a selected layer node on
+ * behalf of the requested node (first argument). This function needs to be
+ * called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
+{
+	/* SRL profiles are not available on all layers. Check if the
+	 * SRL profile can be applied to a node above or below the
+	 * requested node. SRL configuration is possible only if the
+	 * selected layer's node has single child.
+	 */
+	if (sel_layer == node->tx_sched_layer ||
+	    ((sel_layer == node->tx_sched_layer + 1) &&
+	    node->num_children == 1) ||
+	    ((sel_layer == node->tx_sched_layer - 1) &&
+	    (node->parent && node->parent->num_children == 1)))
+		return ICE_SUCCESS;
+
+	return ICE_ERR_CFG;
+}
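+
+/* Illustration (hypothetical layer numbers): if the requested node sits on
+ * layer 8 while the selected SRL layer is 7, the check above passes only
+ * when node->parent has a single child, i.e. sel_layer equals
+ * node->tx_sched_layer - 1 and rate limiting the parent is equivalent to
+ * rate limiting the node itself; otherwise ICE_ERR_CFG is returned.
+ */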
+
+/**
+ * ice_sched_set_q_bw_lmt - sets queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of queue scheduling node.
+ */
+static enum ice_status
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		goto exit_q_bw_lmt;
+	}
+
+	/* Return error if it is not a leaf node */
+	if (node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF)
+		goto exit_q_bw_lmt;
+
+	/* SRL bandwidth layer selection */
+	if (rl_type == ICE_SHARED_BW) {
+		u8 sel_layer; /* selected layer */
+
+		sel_layer = ice_sched_get_rl_prof_layer(pi, rl_type,
+							node->tx_sched_layer);
+		if (sel_layer >= pi->hw->num_tx_sched_layers) {
+			status = ICE_ERR_PARAM;
+			goto exit_q_bw_lmt;
+		}
+		status = ice_sched_validate_srl_node(node, sel_layer);
+		if (status)
+			goto exit_q_bw_lmt;
+	}
+
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_q_bw_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_q_bw_lmt - configure queue bw limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+}
+
+/**
+ * ice_cfg_q_bw_dflt_lmt - configure queue bw default limit
+ * @pi: port information structure
+ * @q_id: queue id (leaf node teid)
+ * @rl_type: min, max, or shared
+ *
+ * This function configures bw default limit of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type)
+{
+	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+}
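+
+/* Usage sketch (illustrative only, not part of the driver): cap a queue to
+ * 100 Mbps, then restore the default limit. Assumes pi is an initialized
+ * port and q_teid is a valid leaf queue TEID.
+ *
+ *	enum ice_status err;
+ *
+ *	err = ice_cfg_q_bw_lmt(pi, q_teid, ICE_MAX_BW, 100000);
+ *	if (!err)
+ *		err = ice_cfg_q_bw_dflt_lmt(pi, q_teid, ICE_MAX_BW);
+ */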
+
+/**
+ * ice_sched_save_tc_node_bw - save tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function saves the modified values of bandwidth settings for later
+ * replay purpose (restore) after reset.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw(struct ice_port_info *pi, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&hw->tc_node_bw_t_info[tc], bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_lmt - sets tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bandwidth limit of tc node.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+			     enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw;
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, tc_node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, tc_node, rl_type, bw);
+	if (!status)
+		status = ice_sched_save_tc_node_bw(pi, tc, rl_type, bw);
+
+exit_set_tc_node_bw:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_lmt - configure tc node bw limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function configures bw limit of tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, bw);
+}
+
+/**
+ * ice_cfg_tc_node_bw_dflt_lmt - configure tc node bw default limit
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ *
+ * This function configures bw default limit of tc node.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type)
+{
+	return ice_sched_set_tc_node_bw_lmt(pi, tc, rl_type, ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_sched_save_tc_node_bw_alloc - save tc node's bw alloc information
+ * @pi: port information structure
+ * @tc: traffic class
+ * @rl_type: rate limit type min or max
+ * @bw_alloc: Bandwidth allocation information
+ *
+ * Save the bw alloc information of the tc node for post-replay use.
+ */
+static enum ice_status
+ice_sched_save_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+				enum ice_rl_type rl_type, u16 bw_alloc)
+{
+	struct ice_hw *hw = pi->hw;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return ICE_ERR_PARAM;
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw_alloc(&hw->tc_node_bw_t_info[tc],
+					   bw_alloc);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_tc_node_bw_alloc - set tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures the bandwidth alloc of the tc node, saves the
+ * changed settings for replay purposes, and returns success if it succeeds
+ * in modifying the bandwidth alloc setting.
+ */
+static enum ice_status
+ice_sched_set_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			       enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *tc_node;
+
+	if (tc >= ICE_MAX_TRAFFIC_CLASS)
+		return status;
+	ice_acquire_lock(&pi->sched_lock);
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_cfg_node_bw_alloc(pi->hw, tc_node, rl_type,
+					     bw_alloc);
+	if (status)
+		goto exit_set_tc_node_bw_alloc;
+	status = ice_sched_save_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+
+exit_set_tc_node_bw_alloc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_tc_node_bw_alloc - configure tc node bw alloc
+ * @pi: port information structure
+ * @tc: tc number
+ * @rl_type: min or max
+ * @bw_alloc: bandwidth alloc
+ *
+ * This function configures the bw allocation of the tc node.
+ * Note: The minimum guaranteed reservation is done via DCBX.
+ */
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	return ice_sched_set_tc_node_bw_alloc(pi, tc, rl_type, bw_alloc);
+}
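+
+/* Usage sketch (illustrative only): weight TC0 against TC1 at roughly 4:1
+ * for excess (EIR) bandwidth. bw_alloc acts as a relative weight between
+ * sibling nodes rather than an absolute rate, so only the ratio matters.
+ *
+ *	ice_cfg_tc_node_bw_alloc(pi, 0, ICE_MAX_BW, 80);
+ *	ice_cfg_tc_node_bw_alloc(pi, 1, ICE_MAX_BW, 20);
+ */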
+
+/**
+ * ice_sched_set_agg_bw_dflt_lmt - set agg node's bw limit to default
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function retrieves the aggregator node for each tc of the given VSI
+ * and sets the node's bw limits to default. This function needs to be
+ * called with the scheduler lock held.
+ */
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *node;
+
+		node = vsi_ctx->sched.ag_node[tc];
+		if (!node)
+			continue;
+
+		/* Set min profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MIN_BW);
+		if (status)
+			break;
+
+		/* Set max profile to default */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, ICE_MAX_BW);
+		if (status)
+			break;
+
+		/* Remove shared profile, if there is one */
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node,
+							ICE_SHARED_BW);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
+/**
+ * ice_sched_get_node_by_id_type - get node from id type
+ * @pi: port information structure
+ * @id: identifier
+ * @agg_type: type of aggregator
+ * @tc: traffic class
+ *
+ * This function returns the node identified by id and aggregator type,
+ * based on traffic class (tc). This function needs to be called with
+ * the scheduler lock held.
+ */
+static struct ice_sched_node *
+ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
+			      enum ice_agg_type agg_type, u8 tc)
+{
+	struct ice_sched_node *node = NULL;
+	struct ice_sched_node *child_node;
+
+	switch (agg_type) {
+	case ICE_AGG_TYPE_VSI: {
+		struct ice_vsi_ctx *vsi_ctx;
+		u16 vsi_handle = (u16)id;
+
+		if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+			break;
+		/* Get sched_vsi_info */
+		vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+		if (!vsi_ctx)
+			break;
+		node = vsi_ctx->sched.vsi_node[tc];
+		break;
+	}
+
+	case ICE_AGG_TYPE_AGG: {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (tc_node)
+			node = ice_sched_get_agg_node(pi->hw, tc_node, id);
+		break;
+	}
+
+	case ICE_AGG_TYPE_Q:
+		/* Current implementation allows only a single queue to be modified */
+		node = ice_sched_get_node(pi, id);
+		break;
+
+	case ICE_AGG_TYPE_QG:
+		/* Current implementation allows only a single qg to be modified */
+		child_node = ice_sched_get_node(pi, id);
+		if (!child_node)
+			break;
+		node = child_node->parent;
+		break;
+
+	default:
+		break;
+	}
+
+	return node;
+}
+
+/**
+ * ice_sched_set_node_bw_lmt_per_tc - set node bw limit per tc
+ * @pi: port information structure
+ * @id: id (software VSI handle or AGG id)
+ * @agg_type: aggregator type (VSI or AGG type node)
+ * @tc: traffic class
+ * @rl_type: min or max
+ * @bw: bandwidth in kbps
+ *
+ * This function sets bw limit of VSI or Aggregator scheduling node
+ * based on tc information from passed in argument bw.
+ */
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status = ICE_ERR_PARAM;
+	struct ice_sched_node *node;
+
+	if (!pi)
+		return status;
+
+	if (rl_type == ICE_UNKNOWN_BW)
+		return status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	node = ice_sched_get_node_by_id_type(pi, id, agg_type, tc);
+	if (!node) {
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong id, agg type, or tc\n");
+		goto exit_set_node_bw_lmt_per_tc;
+	}
+	if (bw == ICE_SCHED_DFLT_BW)
+		status = ice_sched_set_node_bw_dflt_lmt(pi, node, rl_type);
+	else
+		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
+
+exit_set_node_bw_lmt_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
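+
+/* Usage sketch (illustrative only): limit the VSI node of vsi_handle on
+ * TC 0 to 500 Mbps. The function takes the scheduler lock internally, so
+ * the caller must not already hold pi->sched_lock.
+ *
+ *	enum ice_status err;
+ *
+ *	err = ice_sched_set_node_bw_lmt_per_tc(pi, vsi_handle,
+ *					       ICE_AGG_TYPE_VSI, 0,
+ *					       ICE_MAX_BW, 500000);
+ */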
+
+/**
+ * ice_sched_validate_vsi_srl_node - validate VSI SRL node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function validates the SRL node of the VSI node if the available SRL
+ * layer is different from the VSI node layer on all tc(s). This function
+ * needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	u8 tc;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		enum ice_status status;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = vsi_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(vsi_node, sel_layer);
+		if (status)
+			return status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_vsi_bw_shared_lmt - set VSI bw shared limit
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all VSI type
+ * nodes across all traffic classes for the VSI matching the handle. When
+ * a bw value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the
+ * node.
+ */
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_vsi_srl_node(pi, vsi_handle);
+	if (status)
+		goto exit_set_vsi_bw_shared_lmt;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *vsi_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, vsi_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, vsi_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_set_vsi_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
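+
+/* Usage sketch (illustrative only): apply a 1 Gbps shared rate limit
+ * across all of a VSI's TC nodes, then remove it again by passing the
+ * default bw value.
+ *
+ *	ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, 1000000);
+ *	...
+ *	ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, ICE_SCHED_DFLT_BW);
+ */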
+
+/**
+ * ice_sched_validate_agg_srl_node - validate AGG SRL node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ *
+ * This function validates the SRL node of the AGG node if the available SRL
+ * layer is different from the AGG node layer on all tc(s). This function
+ * needs to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id)
+{
+	u8 sel_layer = ICE_SCHED_INVAL_LAYER_NUM;
+	struct ice_sched_agg_info *agg_info;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	LIST_FOR_EACH_ENTRY(agg_info, &pi->hw->agg_list, ice_sched_agg_info,
+			    list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+	if (!agg_id_present)
+		return ICE_ERR_PARAM;
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node, *agg_node;
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+		/* SRL bandwidth layer selection */
+		if (sel_layer == ICE_SCHED_INVAL_LAYER_NUM) {
+			u8 node_layer = agg_node->tx_sched_layer;
+			u8 layer_num;
+
+			layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+								node_layer);
+			if (layer_num >= pi->hw->num_tx_sched_layers)
+				return ICE_ERR_PARAM;
+			sel_layer = layer_num;
+		}
+
+		status = ice_sched_validate_srl_node(agg_node, sel_layer);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_bw_shared_lmt - set aggregator bw shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @bw: bandwidth in kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all agg type
+ * nodes across all traffic classes for the aggregator matching agg_id. When
+ * a bw value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the
+ * node(s).
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+{
+	struct ice_sched_agg_info *agg_info;
+	struct ice_sched_agg_info *tmp;
+	bool agg_id_present = false;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_srl_node(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt;
+
+	LIST_FOR_EACH_ENTRY_SAFE(agg_info, tmp, &pi->hw->agg_list,
+				 ice_sched_agg_info, list_entry)
+		if (agg_info->agg_id == agg_id) {
+			agg_id_present = true;
+			break;
+		}
+
+	if (!agg_id_present) {
+		status = ICE_ERR_PARAM;
+		goto exit_agg_bw_shared_lmt;
+	}
+
+	/* Return success if no nodes are present across tc */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		enum ice_rl_type rl_type = ICE_SHARED_BW;
+		struct ice_sched_node *tc_node, *agg_node;
+
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+
+		agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+		if (!agg_node)
+			continue;
+
+		if (bw == ICE_SCHED_DFLT_BW)
+			/* It removes existing SRL from the node */
+			status = ice_sched_set_node_bw_dflt_lmt(pi, agg_node,
+								rl_type);
+		else
+			status = ice_sched_set_node_bw_lmt(pi, agg_node,
+							   rl_type, bw);
+		if (status)
+			break;
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		if (status)
+			break;
+	}
+
+exit_agg_bw_shared_lmt:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_sibl_node_prio - configure node sibling priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only. This
+ * function needs to be called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	priority = (priority << ICE_AQC_ELEM_GENERIC_PRIO_S) &
+		   ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic &= ~ICE_AQC_ELEM_GENERIC_PRIO_M;
+	data->generic |= priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_cfg_node_bw_alloc - configure node bw weight/alloc params
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @rl_type: rate limit type cir, eir, or shared
+ * @bw_alloc: bw weight/allocation
+ *
+ * This function configures node element's bw allocation.
+ */
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	if (rl_type == ICE_MIN_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_CIR;
+		data->cir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else if (rl_type == ICE_MAX_BW) {
+		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
+		data->eir_bw.bw_alloc = CPU_TO_LE16(bw_alloc);
+	} else {
+		return ICE_ERR_PARAM;
+	}
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_add_agg_cfg - create an aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function creates an aggregator node and intermediate nodes if required
+ * for the given TC
+ */
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *parent, *agg_node, *tc_node;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw = pi->hw;
+	u32 first_node_teid;
+	u16 num_nodes_added;
+	u8 i, aggl;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	/* Does Agg node already exist ? */
+	if (agg_node)
+		return status;
+
+	aggl = ice_sched_get_agg_layer(hw);
+
+	/* need one node in Agg layer */
+	num_nodes[aggl] = 1;
+
+	/* Check whether the intermediate nodes have space to add the
+	 * new agg. If they are full, then SW needs to allocate a new
+	 * intermediate node on those layers
+	 */
+	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
+		parent = ice_sched_get_first_node(hw, tc_node, i);
+
+		/* scan all the siblings */
+		while (parent) {
+			if (parent->num_children < hw->max_children[i])
+				break;
+			parent = parent->sibling;
+		}
+
+		/* all the nodes are full, reserve one for this layer */
+		if (!parent)
+			num_nodes[i]++;
+	}
+
+	/* add the agg node */
+	parent = tc_node;
+	for (i = hw->sw_entry_point_layer; i <= aggl; i++) {
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added) {
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+			/* register the aggregator id with the agg node */
+			if (parent && i == aggl)
+				parent->agg_id = agg_id;
+		} else {
+			parent = parent->children[0];
+		}
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_is_agg_inuse - check whether the agg is in use or not
+ * @pi: port information structure
+ * @node: node pointer
+ *
+ * This function checks whether the agg is attached to any VSI or not.
+ */
+static bool
+ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node)
+{
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	if (node->tx_sched_layer < vsil - 1) {
+		for (i = 0; i < node->num_children; i++)
+			if (ice_sched_is_agg_inuse(pi, node->children[i]))
+				return true;
+		return false;
+	} else {
+		return node->num_children ? true : false;
+	}
+}
+
+/**
+ * ice_sched_rm_agg_cfg - remove the aggregator node
+ * @pi: port information structure
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function removes the aggregator node and any intermediate nodes
+ * from the given TC
+ */
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	struct ice_hw *hw = pi->hw;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Can't remove the agg node if it has children */
+	if (ice_sched_is_agg_inuse(pi, agg_node))
+		return ICE_ERR_IN_USE;
+
+	/* Remove the whole single-child chain above the agg node: walk up
+	 * while each parent has only this one child, then free the topmost
+	 * such node together with its subtree.
+	 */
+	while (agg_node->tx_sched_layer > hw->sw_entry_point_layer) {
+		struct ice_sched_node *parent = agg_node->parent;
+
+		if (!parent)
+			return ICE_ERR_CFG;
+
+		if (parent->num_children > 1)
+			break;
+
+		agg_node = parent;
+	}
+
+	ice_free_sched_node(pi, agg_node);
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_get_free_vsi_parent - Find a free parent node in agg subtree
+ * @hw: pointer to the hw struct
+ * @node: pointer to a child node
+ * @num_nodes: num nodes count array
+ *
+ * This function walks through the aggregator subtree to find a free parent
+ * node
+ */
+static struct ice_sched_node *
+ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
+			      u16 *num_nodes)
+{
+	u8 l = node->tx_sched_layer;
+	u8 vsil, i;
+
+	vsil = ice_sched_get_vsi_layer(hw);
+
+	/* Is it the VSI parent layer? */
+	if (l == vsil - 1)
+		return (node->num_children < hw->max_children[l]) ? node : NULL;
+
+	/* We have intermediate nodes. Let's walk through the subtree. If the
+	 * intermediate node has space to add a new node then clear the count
+	 */
+	if (node->num_children < hw->max_children[l])
+		num_nodes[l] = 0;
+	/* The recursive call below is intentional and won't go more than
+	 * 2 or 3 levels deep.
+	 */
+	for (i = 0; i < node->num_children; i++) {
+		struct ice_sched_node *parent;
+
+		parent = ice_sched_get_free_vsi_parent(hw, node->children[i],
+						       num_nodes);
+		if (parent)
+			return parent;
+	}
+
+	return NULL;
+}
+
+/**
+ * ice_sched_update_parent - update the new parent in SW DB
+ * @new_parent: pointer to a new parent node
+ * @node: pointer to a child node
+ *
+ * This function removes the child from the old parent and adds it to a new
+ * parent
+ */
+static void
+ice_sched_update_parent(struct ice_sched_node *new_parent,
+			struct ice_sched_node *node)
+{
+	struct ice_sched_node *old_parent;
+	u8 i, j;
+
+	old_parent = node->parent;
+
+	/* update the old parent children */
+	for (i = 0; i < old_parent->num_children; i++)
+		if (old_parent->children[i] == node) {
+			for (j = i + 1; j < old_parent->num_children; j++)
+				old_parent->children[j - 1] =
+					old_parent->children[j];
+			old_parent->num_children--;
+			break;
+		}
+
+	/* now move the node to a new parent */
+	new_parent->children[new_parent->num_children++] = node;
+	node->parent = new_parent;
+	node->info.parent_teid = new_parent->info.node_teid;
+}
+
+/**
+ * ice_sched_move_nodes - move child nodes to a given parent
+ * @pi: port information structure
+ * @parent: pointer to parent node
+ * @num_items: number of child nodes to be moved
+ * @list: pointer to child node teids
+ *
+ * This function moves the child nodes to a given parent.
+ */
+static enum ice_status
+ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
+		     u16 num_items, u32 *list)
+{
+	struct ice_aqc_move_elem *buf;
+	struct ice_sched_node *node;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_hw *hw;
+	u16 grps_movd = 0;
+	u8 i;
+
+	hw = pi->hw;
+
+	if (!parent || !num_items)
+		return ICE_ERR_PARAM;
+
+	/* Does parent have enough space */
+	if (parent->num_children + num_items >=
+	    hw->max_children[parent->tx_sched_layer])
+		return ICE_ERR_AQ_FULL;
+
+	buf = (struct ice_aqc_move_elem *) ice_malloc(hw, sizeof(*buf));
+	if (!buf)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < num_items; i++) {
+		node = ice_sched_find_node_by_teid(pi->root, list[i]);
+		if (!node) {
+			status = ICE_ERR_PARAM;
+			goto move_err_exit;
+		}
+
+		buf->hdr.src_parent_teid = node->info.parent_teid;
+		buf->hdr.dest_parent_teid = parent->info.node_teid;
+		buf->teid[0] = node->info.node_teid;
+		buf->hdr.num_elems = CPU_TO_LE16(1);
+		status = ice_aq_move_sched_elems(hw, 1, buf, sizeof(*buf),
+						 &grps_movd, NULL);
+		if (status && grps_movd != 1) {
+			status = ICE_ERR_CFG;
+			goto move_err_exit;
+		}
+
+		/* update the SW DB */
+		ice_sched_update_parent(parent, node);
+	}
+
+move_err_exit:
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_sched_move_vsi_to_agg - move VSI to aggregator node
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @agg_id: aggregator id
+ * @tc: TC number
+ *
+ * This function moves a VSI to an aggregator node or its subtree.
+ * Intermediate nodes may be created if required.
+ */
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc)
+{
+	struct ice_sched_node *vsi_node, *agg_node, *tc_node, *parent;
+	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
+	u32 first_node_teid, vsi_teid;
+	enum ice_status status;
+	u16 num_nodes_added;
+	u8 aggl, vsil, i;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(pi->hw, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	aggl = ice_sched_get_agg_layer(pi->hw);
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+
+	/* initialize intermediate node count to 1 between agg and VSI layers */
+	for (i = aggl + 1; i < vsil; i++)
+		num_nodes[i] = 1;
+
+	/* Check whether the agg subtree has any free node to add the VSI */
+	for (i = 0; i < agg_node->num_children; i++) {
+		parent = ice_sched_get_free_vsi_parent(pi->hw,
+						       agg_node->children[i],
+						       num_nodes);
+		if (parent)
+			goto move_nodes;
+	}
+
+	/* add new nodes */
+	parent = agg_node;
+	for (i = aggl + 1; i < vsil; i++) {
+		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
+						      num_nodes[i],
+						      &first_node_teid,
+						      &num_nodes_added);
+		if (status != ICE_SUCCESS || num_nodes[i] != num_nodes_added)
+			return ICE_ERR_CFG;
+
+		/* The newly added node can be a new parent for the next
+		 * layer nodes
+		 */
+		if (num_nodes_added)
+			parent = ice_sched_find_node_by_teid(tc_node,
+							     first_node_teid);
+		else
+			parent = parent->children[0];
+
+		if (!parent)
+			return ICE_ERR_CFG;
+	}
+
+move_nodes:
+	vsi_teid = LE32_TO_CPU(vsi_node->info.node_teid);
+	return ice_sched_move_nodes(pi, parent, 1, &vsi_teid);
+}
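+
+/* Usage sketch (illustrative only): create an aggregator node on TC 0 and
+ * move a VSI under it. Assumes the VSI already has a node on that TC and
+ * that the caller holds pi->sched_lock, as for other ice_sched_* helpers.
+ *
+ *	status = ice_sched_add_agg_cfg(pi, agg_id, 0);
+ *	if (!status)
+ *		status = ice_sched_move_vsi_to_agg(pi, vsi_handle, agg_id, 0);
+ */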
+
+/**
+ * ice_cfg_rl_burst_size - Set burst size value
+ * @hw: pointer to the hw struct
+ * @bytes: burst size in bytes
+ *
+ * This function configures/sets the burst size to the requested new value.
+ * The new burst size value is used for future rate limit calls. It doesn't
+ * change existing or previously created RL profiles.
+ */
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
+{
+	u16 burst_size_to_prog;
+
+	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
+	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
+		return ICE_ERR_PARAM;
+	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
+		/* byte granularity case */
+		/* Disable MSB granularity bit */
+		burst_size_to_prog = ICE_BYTE_GRANULARITY;
+		/* round number to nearest 256 granularity */
+		bytes = ice_round_to_num(bytes, 256);
+		/* check rounding doesn't go beyond the allowed value */
+		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
+		burst_size_to_prog |= (u16)bytes;
+	} else {
+		/* k bytes granularity case */
+		/* Enable MSB granularity bit */
+		burst_size_to_prog = ICE_KBYTE_GRANULARITY;
+		/* round number to nearest 1024 granularity */
+		bytes = ice_round_to_num(bytes, 1024);
+		/* check rounding doesn't go beyond the allowed value */
+		if (bytes > ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY)
+			bytes = ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY;
+		/* The value is in k bytes */
+		burst_size_to_prog |= (u16)(bytes / 1024);
+	}
+	hw->max_burst_size = burst_size_to_prog;
+	return ICE_SUCCESS;
+}
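+
+/* Worked example of the encoding above: a request of 5000 bytes exceeds
+ * ICE_MAX_BURST_SIZE_BYTE_GRANULARITY (2047), so the 1K-byte path is taken:
+ * 5000 rounds to 5120 and the programmed value becomes
+ * ICE_KBYTE_GRANULARITY | (5120 / 1024) = 0x800 | 5 = 0x805. A request of
+ * 1000 bytes stays on the byte path, rounds to 1024, and is programmed
+ * as-is: 0x400.
+ */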
+
+/**
+ * ice_sched_replay_node_prio - re-configure node priority
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @priority: priority value
+ *
+ * This function configures node element's priority value. It
+ * needs to be called with scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			   u8 priority)
+{
+	struct ice_aqc_txsched_elem_data buf;
+	struct ice_aqc_txsched_elem *data;
+	enum ice_status status;
+
+	buf = node->info;
+	data = &buf.data;
+	data->valid_sections |= ICE_AQC_ELEM_VALID_GENERIC;
+	data->generic = priority;
+
+	/* Configure element */
+	status = ice_sched_update_elem(hw, node, &buf);
+	return status;
+}
+
+/**
+ * ice_sched_replay_node_bw - replay node(s) bw
+ * @hw: pointer to the hw struct
+ * @node: sched node to configure
+ * @bw_t_info: bw type information
+ *
+ * This function restores node's bw from bw_t_info. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_node_bw(struct ice_hw *hw, struct ice_sched_node *node,
+			 struct ice_bw_type_info *bw_t_info)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_ERR_PARAM;
+	u16 bw_alloc;
+
+	if (!node)
+		return status;
+	if (!ice_is_any_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CNT))
+		return ICE_SUCCESS;
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_PRIO)) {
+		status = ice_sched_replay_node_prio(hw, node,
+						    bw_t_info->generic);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MIN_BW,
+						   bw_t_info->cir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_CIR_WT)) {
+		bw_alloc = bw_t_info->cir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MIN_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR)) {
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_MAX_BW,
+						   bw_t_info->eir_bw.bw);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_EIR_WT)) {
+		bw_alloc = bw_t_info->eir_bw.bw_alloc;
+		status = ice_sched_cfg_node_bw_alloc(hw, node, ICE_MAX_BW,
+						     bw_alloc);
+		if (status)
+			return status;
+	}
+	if (ice_is_bit_set(bw_t_info->bw_t_bitmap, ICE_BW_TYPE_SHARED))
+		status = ice_sched_set_node_bw_lmt(pi, node, ICE_SHARED_BW,
+						   bw_t_info->shared_bw);
+	return status;
+}
+
+/**
+ * ice_sched_replay_agg_bw - replay aggregator node(s) bw
+ * @hw: pointer to the hw struct
+ * @agg_info: aggregator data structure
+ *
+ * This function replays the bw of aggregator type nodes. The caller needs
+ * to hold the scheduler lock.
+ */
+static enum ice_status
+ice_sched_replay_agg_bw(struct ice_hw *hw, struct ice_sched_agg_info *agg_info)
+{
+	struct ice_sched_node *tc_node, *agg_node;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	if (!agg_info)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_any_bit_set(agg_info->bw_t_info[tc].bw_t_bitmap,
+					ICE_BW_TYPE_CNT))
+			continue;
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		agg_node = ice_sched_get_agg_node(hw, tc_node,
+						  agg_info->agg_id);
+		if (!agg_node) {
+			status = ICE_ERR_PARAM;
+			break;
+		}
+		status = ice_sched_replay_node_bw(hw, agg_node,
+						  &agg_info->bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_get_ena_tc_bitmap - get enabled TC bitmap
+ * @pi: port info struct
+ * @tc_bitmap: 8 bits TC bitmap to check
+ * @ena_tc_bitmap: 8 bits enabled TC bitmap to return
+ *
+ * This function returns the enabled TC bitmap in ena_tc_bitmap. Some TCs may
+ * be missing after a reset, so only the currently enabled TCs are returned.
+ * This function needs to be called with the scheduler lock held.
+ */
+static void
+ice_sched_get_ena_tc_bitmap(struct ice_port_info *pi, ice_bitmap_t *tc_bitmap,
+			    ice_bitmap_t *ena_tc_bitmap)
+{
+	u8 tc;
+
+	/* Some tc(s) may be missing after reset, adjust for replay */
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++)
+		if (ice_is_tc_ena(*tc_bitmap, tc) &&
+		    (ice_sched_get_tc_node(pi, tc)))
+			ice_set_bit(tc, ena_tc_bitmap);
+}
+
+/**
+ * ice_sched_replay_agg - recreate aggregator node(s)
+ * @hw: pointer to the hw struct
+ *
+ * This function recreates aggregator type nodes that were not replayed
+ * earlier. It also replays aggregator bw information. These aggregator
+ * nodes are not yet associated with VSI type nodes.
+ */
+void ice_sched_replay_agg(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		/* replay agg (re-create aggregator node) */
+		if (!ice_cmp_bitmap(agg_info->tc_bitmap,
+				    agg_info->replay_tc_bitmap,
+				    ICE_MAX_TRAFFIC_CLASS)) {
+			ice_declare_bitmap(replay_bitmap,
+					   ICE_MAX_TRAFFIC_CLASS);
+			enum ice_status status;
+
+			ice_zero_bitmap(replay_bitmap,
+					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_sched_get_ena_tc_bitmap(pi,
+						    agg_info->replay_tc_bitmap,
+						    replay_bitmap);
+			status = ice_sched_cfg_agg(hw->port_info,
+						   agg_info->agg_id,
+						   ICE_AGG_TYPE_AGG,
+						   replay_bitmap);
+			if (status) {
+				ice_info(hw, "Replay agg id[%d] failed\n",
+					 agg_info->agg_id);
+				/* Move on to next one */
+				continue;
+			}
+			/* Replay agg node bw (restore agg bw) */
+			status = ice_sched_replay_agg_bw(hw, agg_info);
+			if (status)
+				ice_info(hw, "Replay agg bw [id=%d] failed\n",
+					 agg_info->agg_id);
+		}
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_agg_vsi_preinit - Agg/VSI replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * This function initializes the aggregator(s) TC bitmap to zero, a required
+ * preinit step for replaying aggregators.
+ */
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+
+	ice_acquire_lock(&pi->sched_lock);
+	LIST_FOR_EACH_ENTRY(agg_info, &hw->agg_list, ice_sched_agg_info,
+			    list_entry) {
+		struct ice_sched_agg_vsi_info *agg_vsi_info;
+
+		agg_info->tc_bitmap[0] = 0;
+		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
+				    ice_sched_agg_vsi_info, list_entry)
+			agg_vsi_info->tc_bitmap[0] = 0;
+	}
+	ice_release_lock(&pi->sched_lock);
+}
+
+/**
+ * ice_sched_replay_tc_node_bw - replay tc node(s) bw
+ * @hw: pointer to the hw struct
+ *
+ * This function replays the bw of tc nodes; it takes the scheduler lock
+ * internally.
+ */
+enum ice_status
+ice_sched_replay_tc_node_bw(struct ice_hw *hw)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	ice_acquire_lock(&pi->sched_lock);
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		struct ice_sched_node *tc_node;
+
+		tc_node = ice_sched_get_tc_node(hw->port_info, tc);
+		if (!tc_node)
+			continue; /* tc not present */
+		status = ice_sched_replay_node_bw(hw, tc_node,
+						  &hw->tc_node_bw_t_info[tc]);
+		if (status)
+			break;
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_bw - replay VSI type node(s) bw
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: 8 bits TC bitmap
+ *
+ * This function replays the bandwidth of VSI type nodes. This function needs
+ * to be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
+			ice_bitmap_t *tc_bitmap)
+{
+	struct ice_sched_node *vsi_node, *tc_node;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_bw_type_info *bw_t_info;
+	struct ice_vsi_ctx *vsi_ctx;
+	enum ice_status status = ICE_SUCCESS;
+	u8 tc;
+
+	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+		if (!ice_is_tc_ena(*tc_bitmap, tc))
+			continue;
+		tc_node = ice_sched_get_tc_node(pi, tc);
+		if (!tc_node)
+			continue;
+		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		if (!vsi_node)
+			continue;
+		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
+		status = ice_sched_replay_node_bw(hw, vsi_node, bw_t_info);
+		if (status)
+			break;
+	}
+	return status;
+}
+
+/**
+ * ice_sched_replay_vsi_agg - replay agg & VSI to aggregator node(s)
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays the aggregator node, the VSI-to-aggregator
+ * association, and their node bandwidth information. This function needs to
+ * be called with the scheduler lock held.
+ */
+static enum ice_status
+ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_declare_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	struct ice_sched_agg_vsi_info *agg_vsi_info;
+	struct ice_port_info *pi = hw->port_info;
+	struct ice_sched_agg_info *agg_info;
+	enum ice_status status;
+
+	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
+	if (!agg_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	agg_vsi_info = ice_get_agg_vsi_info(agg_info, vsi_handle);
+	if (!agg_vsi_info)
+		return ICE_SUCCESS; /* Not present in list - default Agg case */
+	ice_sched_get_ena_tc_bitmap(pi, agg_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Replay agg node associated to vsi_handle */
+	status = ice_sched_cfg_agg(hw->port_info, agg_info->agg_id,
+				   ICE_AGG_TYPE_AGG, replay_bitmap);
+	if (status)
+		return status;
+	/* Replay agg node bw (restore agg bw) */
+	status = ice_sched_replay_agg_bw(hw, agg_info);
+	if (status)
+		return status;
+
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	ice_sched_get_ena_tc_bitmap(pi, agg_vsi_info->replay_tc_bitmap,
+				    replay_bitmap);
+	/* Move this VSI (vsi_handle) to above aggregator */
+	status = ice_sched_assoc_vsi_to_agg(pi, agg_info->agg_id, vsi_handle,
+					    replay_bitmap);
+	if (status)
+		return status;
+	/* Replay VSI bw (restore VSI bw) */
+	return ice_sched_replay_vsi_bw(hw, vsi_handle,
+				       agg_vsi_info->tc_bitmap);
+}
+
+/**
+ * ice_replay_vsi_agg - replay VSI to aggregator node
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ *
+ * This function replays association of VSI to aggregator type nodes, and
+ * node bandwidth information.
+ */
+enum ice_status
+ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status status;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_replay_vsi_agg(hw, vsi_handle);
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
new file mode 100644
index 0000000..a556594
--- /dev/null
+++ b/drivers/net/ice/base/ice_sched.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SCHED_H_
+#define _ICE_SCHED_H_
+
+#include "ice_common.h"
+
+#define ICE_QGRP_LAYER_OFFSET	2
+#define ICE_VSI_LAYER_OFFSET	4
+#define ICE_AGG_LAYER_OFFSET	6
+#define ICE_SCHED_INVAL_LAYER_NUM	0xFF
+/* Burst size is a 12 bits register that is configured while creating the RL
+ * profile(s). MSB is a granularity bit and tells the granularity type
+ * 0 - LSB bits are in bytes granularity
+ * 1 - LSB bits are in 1K bytes granularity
+ */
+#define ICE_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			0x800
+#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
+
+#define ICE_RL_PROF_FREQUENCY 446000000
+#define ICE_RL_PROF_ACCURACY_BYTES 128
+#define ICE_RL_PROF_MULTIPLIER 10000
+#define ICE_RL_PROF_TS_MULTIPLIER 32
+#define ICE_RL_PROF_FRACTION 512
+
+struct rl_profile_params {
+	u32 bw;			/* in Kbps */
+	u16 rl_multiplier;
+	u16 wake_up_calc;
+	u16 rl_encode;
+};
+
+/* BW rate limit profile parameters list entry along
+ * with bandwidth maintained per layer in port info
+ */
+struct ice_aqc_rl_profile_info {
+	struct ice_aqc_rl_profile_elem profile;
+	struct LIST_ENTRY_TYPE list_entry;
+	u32 bw;			/* requested */
+	u16 prof_id_ref;	/* profile id to node association ref count */
+};
+
+struct ice_sched_agg_vsi_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u16 vsi_handle;
+	/* save agg vsi TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+struct ice_sched_agg_info {
+	struct LIST_HEAD_TYPE agg_vsi_list;
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+	u32 agg_id;
+	enum ice_agg_type agg_type;
+	/* bw_t_info saves agg bw information */
+	struct ice_bw_type_info bw_t_info[ICE_MAX_TRAFFIC_CLASS];
+	/* save agg TC bitmap */
+	ice_declare_bitmap(replay_tc_bitmap, ICE_MAX_TRAFFIC_CLASS);
+};
+
+/* FW AQ command calls */
+enum ice_status
+ice_aq_query_rl_profile(struct ice_hw *hw, u16 num_profiles,
+			struct ice_aqc_rl_profile_generic_elem *buf,
+			u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_cfg_l2_node_cgd(struct ice_hw *hw, u16 num_nodes,
+		       struct ice_aqc_cfg_l2_node_cgd_data *buf, u16 buf_size,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
+			struct ice_aqc_move_elem *buf, u16 buf_size,
+			u16 *grps_movd, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
+			 struct ice_aqc_get_elem *buf, u16 buf_size,
+			 u16 *elems_ret, struct ice_sq_cd *cd);
+enum ice_status ice_sched_init_port(struct ice_port_info *pi);
+enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+
+/* Functions to cleanup scheduler SW DB */
+void ice_sched_clear_port(struct ice_port_info *pi);
+void ice_sched_cleanup_all(struct ice_hw *hw);
+void ice_sched_clear_agg(struct ice_hw *hw);
+
+/* Get a scheduling node from SW DB for given TEID */
+struct ice_sched_node *ice_sched_get_node(struct ice_port_info *pi, u32 teid);
+struct ice_sched_node *
+ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid);
+/* Add a scheduling node into SW DB for given info */
+enum ice_status
+ice_sched_add_node(struct ice_port_info *pi, u8 layer,
+		   struct ice_aqc_txsched_elem_data *info);
+void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node);
+struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc);
+struct ice_sched_node *
+ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			   u8 owner);
+enum ice_status
+ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
+		  u8 owner, bool enable);
+enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
+struct ice_sched_node *
+ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u32 agg_id);
+struct ice_sched_node *
+ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+		       u16 vsi_handle);
+bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
+enum ice_status
+ice_aq_query_node_to_root(struct ice_hw *hw, u32 node_teid,
+			  struct ice_aqc_get_elem *buf, u16 buf_size,
+			  struct ice_sq_cd *cd);
+
+/* Tx scheduler rate limiter functions */
+enum ice_status
+ice_cfg_agg(struct ice_port_info *pi, u32 agg_id,
+	    enum ice_agg_type agg_type, u8 tc_bitmap);
+enum ice_status
+ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
+		    u8 tc_bitmap);
+enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
+		 u32 bw);
+enum ice_status
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
+		      enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
+		       enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_tc_node_bw_dflt_lmt(struct ice_port_info *pi, u8 tc,
+			    enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_agg_bw_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+			       enum ice_rl_type rl_type);
+enum ice_status
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw);
+enum ice_status
+ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
+enum ice_status
+ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
+		       u8 *q_prio);
+enum ice_status
+ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+enum ice_status
+ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
+				u16 num_vsis, u16 *vsi_handle_arr,
+				u8 *node_prio, u8 tc);
+enum ice_status
+ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,
+		     enum ice_rl_type rl_type, u8 *bw_alloc);
+bool
+ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
+			       struct ice_sched_node *node);
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle);
+enum ice_status
+ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
+				 enum ice_agg_type agg_type, u8 tc,
+				 enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
+				u32 bw);
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+enum ice_status
+ice_sched_cfg_sibl_node_prio(struct ice_hw *hw, struct ice_sched_node *node,
+			     u8 priority);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status
+ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc);
+enum ice_status
+ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
+			  u8 tc);
+enum ice_status
+ice_sched_del_rl_profile(struct ice_hw *hw,
+			 struct ice_aqc_rl_profile_info *rl_info);
+void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi);
+#endif /* _ICE_SCHED_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 08/31] net/ice/base: add virtual switch code
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (6 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
                     ` (23 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to handle the virtual switch within the NIC.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 2812 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  333 +++++
 2 files changed, 3145 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
new file mode 100644
index 0000000..0379cd0
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.c
@@ -0,0 +1,2812 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_switch.h"
+
+
+#define ICE_ETH_DA_OFFSET		0
+#define ICE_ETH_ETHTYPE_OFFSET		12
+#define ICE_ETH_VLAN_TCI_OFFSET		14
+#define ICE_MAX_VLAN_ID			0xFFF
+
+/* Dummy ethernet header needed in the ice_aqc_sw_rules_elem
+ * struct to configure any switch filter rules.
+ * {DA (6 bytes), SA(6 bytes),
+ * Ether type (2 bytes for header without VLAN tag) OR
+ * VLAN tag (4 bytes for header with VLAN tag) }
+ *
+ * A note on the hardcoded values:
+ * byte 0 = 0x2: to identify it as locally administered DA MAC
+ * byte 6 = 0x2: to identify it as locally administered SA MAC
+ * byte 12 = 0x81 & byte 13 = 0x00:
+ *	In case of a VLAN filter the first two bytes define the ether type
+ *	(0x8100) and the remaining two bytes are a placeholder for
+ *	programming a given VLAN id.
+ *	In case of an ether type filter it is treated as a header without a
+ *	VLAN tag, and bytes 12 and 13 are used to program a given ether type
+ *	instead.
+#define DUMMY_ETH_HDR_LEN		16
+static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
+							0x2, 0, 0, 0, 0, 0,
+							0x81, 0, 0, 0};
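+
+/* For example (illustrative values): programming a VLAN filter for VLAN id
+ * 100 (0x064) keeps bytes 12-13 as the 0x8100 ether type above and writes
+ * the id into the placeholder, so bytes 12-15 become 0x81 0x00 0x00 0x64.
+ * An ether-type filter for IPv4 (0x0800) instead overwrites bytes 12-13
+ * with 0x08 0x00.
+ */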
+
+#define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) + DUMMY_ETH_HDR_LEN - 1)
+#define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lkup_rx_tx) - 1)
+#define ICE_SW_RULE_LG_ACT_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_lg_act) - \
+	 sizeof(((struct ice_sw_rule_lg_act *)0)->act) + \
+	 ((n) * sizeof(((struct ice_sw_rule_lg_act *)0)->act)))
+#define ICE_SW_RULE_VSI_LIST_SIZE(n) \
+	(sizeof(struct ice_aqc_sw_rules_elem) - \
+	 sizeof(((struct ice_aqc_sw_rules_elem *)0)->pdata) + \
+	 sizeof(struct ice_sw_rule_vsi_list) - \
+	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
+	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
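+
+/* All of these sizes follow one pattern: start from the AQ element, drop
+ * the generic pdata placeholder, add the specific rule layout, and replace
+ * its one-entry flexible tail with the actual payload. A sketch of typical
+ * use (illustrative only):
+ *
+ *	struct ice_aqc_sw_rules_elem *s_rule;
+ *
+ *	s_rule = (struct ice_aqc_sw_rules_elem *)
+ *		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+ */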
+
+
+/**
+ * ice_init_def_sw_recp - initialize the recipe bookkeeping tables
+ * @hw: pointer to the hw struct
+ *
+ * Allocate memory for the entire recipe table and initialize the structures/
+ * entries corresponding to basic recipes.
+ */
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
+{
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	recps = (struct ice_sw_recipe *)
+		ice_calloc(hw, ICE_MAX_NUM_RECIPES, sizeof(*recps));
+	if (!recps)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+		INIT_LIST_HEAD(&recps[i].filt_rules);
+		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		ice_init_lock(&recps[i].filt_rule_lock);
+	}
+
+	hw->switch_info->recp_list = recps;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_sw_cfg - get switch configuration
+ * @hw: pointer to the hardware structure
+ * @buf: pointer to the result buffer
+ * @buf_size: length of the buffer available for response
+ * @req_desc: pointer to requested descriptor
+ * @num_elems: pointer to number of elements
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get switch configuration (0x0200) to be placed in 'buf'.
+ * This admin command returns information such as initial VSI/port number
+ * and switch ID it belongs to.
+ *
+ * NOTE: *req_desc is both an input/output parameter.
+ * The caller first calls this function with *req_desc set to 0. If the
+ * response from f/w has *req_desc set to 0, all the switch configuration
+ * information has been returned; if non-zero (meaning not all the
+ * information was returned), the caller should call this function again
+ * with *req_desc set to the previous value returned by f/w to get the
+ * next block of switch configuration information.
+ *
+ * *num_elems is an output-only parameter that reflects the number of
+ * elements in the response buffer. The caller should use *num_elems while
+ * parsing the response buffer.
+ */
+static enum ice_status
+ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp *buf,
+		  u16 buf_size, u16 *req_desc, u16 *num_elems,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_sw_cfg *cmd;
+	enum ice_status status;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_sw_cfg);
+	cmd = &desc.params.get_sw_conf;
+	cmd->element = CPU_TO_LE16(*req_desc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status) {
+		*req_desc = LE16_TO_CPU(cmd->element);
+		*num_elems = LE16_TO_CPU(cmd->num_elems);
+	}
+
+	return status;
+}
+
+
+
+/**
+ * ice_aq_add_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware (0x0210)
+ */
+static enum ice_status
+ice_aq_add_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *res;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	res = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_vsi);
+
+	if (!vsi_ctx->alloc_from_pool)
+		cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num |
+					   ICE_AQ_VSI_IS_VALID);
+
+	cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsi_num = LE16_TO_CPU(res->vsi_num) & ICE_AQ_VSI_NUM_M;
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(res->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(res->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_free_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware (0x0213)
+ */
+static enum ice_status
+ice_aq_free_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_free_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+	if (keep_vsi_alloc)
+		cmd->cmd_flags = CPU_TO_LE16(ICE_AQ_VSI_KEEP_ALLOC);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware (0x0211)
+ */
+static enum ice_status
+ice_aq_update_vsi(struct ice_hw *hw, struct ice_vsi_ctx *vsi_ctx,
+		  struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_update_free_vsi_resp *resp;
+	struct ice_aqc_add_get_update_free_vsi *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.vsi_cmd;
+	resp = &desc.params.add_update_free_vsi_res;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_vsi);
+
+	cmd->vsi_num = CPU_TO_LE16(vsi_ctx->vsi_num | ICE_AQ_VSI_IS_VALID);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, &vsi_ctx->info,
+				 sizeof(vsi_ctx->info), cd);
+
+	if (!status) {
+		vsi_ctx->vsis_allocd = LE16_TO_CPU(resp->vsi_used);
+		vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free);
+	}
+
+	return status;
+}
+
+/**
+ * ice_is_vsi_valid - check whether the VSI is valid or not
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * check whether the VSI is valid or not
+ */
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle)
+{
+	return vsi_handle < ICE_MAX_VSI && hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_get_hw_vsi_num - return the hw VSI number
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the hw VSI number
+ * Caution: call this function only if VSI is valid (ice_is_vsi_valid)
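+ *
+ * Typical guarded usage (sketch):
+ *
+ *	if (ice_is_vsi_valid(hw, vsi_handle))
+ *		vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);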
+ */
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle)
+{
+	return hw->vsi_ctx[vsi_handle]->vsi_num;
+}
+
+/**
+ * ice_get_vsi_ctx - return the VSI context entry for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * return the VSI context entry for a given VSI handle
+ */
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	return (vsi_handle >= ICE_MAX_VSI) ? NULL : hw->vsi_ctx[vsi_handle];
+}
+
+/**
+ * ice_save_vsi_ctx - save the VSI context for a given VSI handle
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ * @vsi: VSI context pointer
+ *
+ * save the VSI context entry for a given VSI handle
+ */
+static void
+ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
+{
+	hw->vsi_ctx[vsi_handle] = vsi;
+}
+
+/**
+ * ice_clear_vsi_ctx - clear the VSI context entry
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ *
+ * clear the VSI context entry
+ */
+static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (vsi) {
+		ice_destroy_lock(&vsi->rss_locks);
+		ice_free(hw, vsi);
+		hw->vsi_ctx[vsi_handle] = NULL;
+	}
+}
+
+/**
+ * ice_clear_all_vsi_ctx - clear all the VSI context entries
+ * @hw: pointer to the hw struct
+ */
+void ice_clear_all_vsi_ctx(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_VSI; i++)
+		ice_clear_vsi_ctx(hw, i);
+}
+
+/**
+ * ice_add_vsi - add VSI context to the hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle provided by drivers
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add a VSI context to the hardware also add it into the VSI handle list.
+ * If this function gets called after reset for existing VSIs, then update
+ * with the new HW VSI number in the corresponding VSI handle list entry.
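+ *
+ * Minimal usage sketch (the field values are illustrative assumptions,
+ * not a complete VSI configuration):
+ *
+ *	struct ice_vsi_ctx ctx = { 0 };
+ *
+ *	ctx.alloc_from_pool = true;
+ *	status = ice_add_vsi(hw, vsi_handle, &ctx, NULL);
+ *	if (!status)
+ *		vsi_num = ctx.vsi_num;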
+ */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd)
+{
+	struct ice_vsi_ctx *tmp_vsi_ctx;
+	enum ice_status status;
+
+	if (vsi_handle >= ICE_MAX_VSI)
+		return ICE_ERR_PARAM;
+	status = ice_aq_add_vsi(hw, vsi_ctx, cd);
+	if (status)
+		return status;
+	tmp_vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!tmp_vsi_ctx) {
+		/* Create a new vsi context */
+		tmp_vsi_ctx = (struct ice_vsi_ctx *)
+			ice_malloc(hw, sizeof(*tmp_vsi_ctx));
+		if (!tmp_vsi_ctx) {
+			ice_aq_free_vsi(hw, vsi_ctx, false, cd);
+			return ICE_ERR_NO_MEMORY;
+		}
+		*tmp_vsi_ctx = *vsi_ctx;
+		ice_init_lock(&tmp_vsi_ctx->rss_locks);
+		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
+	} else {
+		/* update with new HW VSI num */
+		if (tmp_vsi_ctx->vsi_num != vsi_ctx->vsi_num)
+			tmp_vsi_ctx->vsi_num = vsi_ctx->vsi_num;
+	}
+
+	return status;
+}
+
+/**
+ * ice_free_vsi- free VSI context from hardware and VSI handle list
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @keep_vsi_alloc: keep VSI allocation as part of this PF's resources
+ * @cd: pointer to command details structure or NULL
+ *
+ * Free VSI context info from hardware as well as from VSI handle list
+ */
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	status = ice_aq_free_vsi(hw, vsi_ctx, keep_vsi_alloc, cd);
+	if (!status)
+		ice_clear_vsi_ctx(hw, vsi_handle);
+	return status;
+}
+
+/**
+ * ice_update_vsi
+ * @hw: pointer to the hw struct
+ * @vsi_handle: unique VSI handle
+ * @vsi_ctx: pointer to a VSI context struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update VSI context in the hardware
+ */
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	vsi_ctx->vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+	return ice_aq_update_vsi(hw, vsi_ctx, cd);
+}
+
+
+
+/**
+ * ice_aq_alloc_free_vsi_list
+ * @hw: pointer to the hw struct
+ * @vsi_list_id: VSI list id returned or used for lookup
+ * @lkup_type: switch rule filter lookup type
+ * @opc: switch rules population command type - pass in the command opcode
+ *
+ * Allocates or frees a VSI list resource
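+ *
+ * Symmetric usage sketch:
+ *
+ *	u16 id;
+ *
+ *	status = ice_aq_alloc_free_vsi_list(hw, &id, ICE_SW_LKUP_MAC,
+ *					    ice_aqc_opc_alloc_res);
+ *	...
+ *	status = ice_aq_alloc_free_vsi_list(hw, &id, ICE_SW_LKUP_MAC,
+ *					    ice_aqc_opc_free_res);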
+ */
+static enum ice_status
+ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
+			   enum ice_sw_lkup_type lkup_type,
+			   enum ice_adminq_opc opc)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	struct ice_aqc_res_elem *vsi_ele;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)
+		ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+	sw_buf->num_elems = CPU_TO_LE16(1);
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN) {
+		sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_REP);
+	} else if (lkup_type == ICE_SW_LKUP_VLAN) {
+		sw_buf->res_type =
+			CPU_TO_LE16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
+	} else {
+		status = ICE_ERR_PARAM;
+		goto ice_aq_alloc_free_vsi_list_exit;
+	}
+
+	if (opc == ice_aqc_opc_free_res)
+		sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(*vsi_list_id);
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, opc, NULL);
+	if (status)
+		goto ice_aq_alloc_free_vsi_list_exit;
+
+	if (opc == ice_aqc_opc_alloc_res) {
+		vsi_ele = &sw_buf->elem[0];
+		*vsi_list_id = LE16_TO_CPU(vsi_ele->e.sw_resp);
+	}
+
+ice_aq_alloc_free_vsi_list_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+
+/**
+ * ice_aq_sw_rules - add/update/remove switch rules
+ * @hw: pointer to the hw struct
+ * @rule_list: pointer to switch rule population list
+ * @rule_list_sz: total size of the rule list in bytes
+ * @num_rules: number of switch rules in the rule_list
+ * @opc: switch rules population command type - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x02a0)/Update(0x02a1)/Remove(0x02a2) switch rules commands to firmware
+ */
+static enum ice_status
+ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
+		u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_sw_rules");
+
+	if (opc != ice_aqc_opc_add_sw_rules &&
+	    opc != ice_aqc_opc_update_sw_rules &&
+	    opc != ice_aqc_opc_remove_sw_rules)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	desc.params.sw_rules.num_rules_fltr_entry_index =
+		CPU_TO_LE16(num_rules);
+	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
+}
+
+
+/**
+ * ice_init_port_info - Initialize port_info with switch configuration data
+ * @pi: pointer to port_info
+ * @vsi_port_num: VSI number or port number
+ * @type: Type of switch element (port or VSI)
+ * @swid: switch ID of the switch the element is attached to
+ * @pf_vf_num: PF or VF number
+ * @is_vf: true if the element is a VF, false otherwise
+ */
+static void
+ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
+		   u16 swid, u16 pf_vf_num, bool is_vf)
+{
+	switch (type) {
+	case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+		pi->lport = (u8)(vsi_port_num & ICE_LPORT_MASK);
+		pi->sw_id = swid;
+		pi->pf_vf_num = pf_vf_num;
+		pi->is_vf = is_vf;
+		pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+		break;
+	default:
+		ice_debug(pi->hw, ICE_DBG_SW,
+			  "incorrect VSI/port type received\n");
+		break;
+	}
+}
+
+/**
+ * ice_get_initial_sw_cfg - Get initial port and default VSI data
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_get_sw_cfg_resp *rbuf;
+	enum ice_status status;
+	u16 num_total_ports;
+	u16 req_desc = 0;
+	u16 num_elems;
+	u16 j = 0;
+	u16 i;
+
+	num_total_ports = 1;
+
+	rbuf = (struct ice_aqc_get_sw_cfg_resp *)
+		ice_malloc(hw, ICE_SW_CFG_MAX_BUF_LEN);
+
+	if (!rbuf)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Multiple calls to ice_aq_get_sw_cfg may be required
+	 * to get all the switch configuration information. The need
+	 * for additional calls is indicated by ice_aq_get_sw_cfg
+	 * writing a non-zero value in req_desc
+	 */
+	do {
+		status = ice_aq_get_sw_cfg(hw, rbuf, ICE_SW_CFG_MAX_BUF_LEN,
+					   &req_desc, &num_elems, NULL);
+
+		if (status)
+			break;
+
+		for (i = 0; i < num_elems; i++) {
+			struct ice_aqc_get_sw_cfg_resp_elem *ele;
+			u16 pf_vf_num, swid, vsi_port_num;
+			bool is_vf = false;
+			u8 type;
+
+			ele = rbuf[i].elements;
+			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
+				ICE_AQC_GET_SW_CONF_RESP_VSI_PORT_NUM_M;
+
+			pf_vf_num = LE16_TO_CPU(ele->pf_vf_num) &
+				ICE_AQC_GET_SW_CONF_RESP_FUNC_NUM_M;
+
+			swid = LE16_TO_CPU(ele->swid);
+
+			if (LE16_TO_CPU(ele->pf_vf_num) &
+			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
+				is_vf = true;
+
+			type = LE16_TO_CPU(ele->vsi_port_num) >>
+				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+
+			switch (type) {
+			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
+			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
+				if (j == num_total_ports) {
+					ice_debug(hw, ICE_DBG_SW,
+						  "more ports than expected\n");
+					status = ICE_ERR_CFG;
+					goto out;
+				}
+				ice_init_port_info(hw->port_info,
+						   vsi_port_num, type, swid,
+						   pf_vf_num, is_vf);
+				j++;
+				break;
+			default:
+				break;
+			}
+		}
+	} while (req_desc && !status);
+
+
+out:
+	ice_free(hw, (void *)rbuf);
+	return status;
+}
+
+
+/**
+ * ice_fill_sw_info - Helper function to populate lb_en and lan_en
+ * @hw: pointer to the hardware structure
+ * @fi: filter info structure to fill/update
+ *
+ * This helper function populates the lb_en and lan_en elements of the provided
+ * ice_fltr_info struct using the switch's type and the characteristics of the
+ * switch rule being configured.
+ */
+static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi)
+{
+	fi->lb_en = false;
+	fi->lan_en = false;
+	if ((fi->flag & ICE_FLTR_TX) &&
+	    (fi->fltr_act == ICE_FWD_TO_VSI ||
+	     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+	     fi->fltr_act == ICE_FWD_TO_Q ||
+	     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+		/* Setting LB for prune actions will result in replicated
+		 * packets to the internal switch that will be dropped.
+		 */
+		if (fi->lkup_type != ICE_SW_LKUP_VLAN)
+			fi->lb_en = true;
+
+		/* Set lan_en to TRUE if
+		 * 1. The switch is a VEB AND
+		 * 2. Any one of the following is true:
+		 * 2.1 The lookup is a directional lookup like ethertype,
+		 *     promiscuous, ethertype-MAC, promiscuous-VLAN
+		 *     and default-port, OR
+		 * 2.2 The lookup is VLAN, OR
+		 * 2.3 The lookup is MAC with mcast or bcast addr for MAC, OR
+		 * 2.4 The lookup is MAC_VLAN with mcast or bcast addr for MAC.
+		 *
+		 * OR
+		 *
+		 * The switch is a VEPA.
+		 *
+		 * In all other cases, the LAN enable has to be set to false.
+		 */
+		if (hw->evb_veb) {
+			if (fi->lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC ||
+			    fi->lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+			    fi->lkup_type == ICE_SW_LKUP_PROMISC_VLAN ||
+			    fi->lkup_type == ICE_SW_LKUP_DFLT ||
+			    fi->lkup_type == ICE_SW_LKUP_VLAN ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)) ||
+			    (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN &&
+			     !IS_UNICAST_ETHER_ADDR(fi->l_data.mac.mac_addr)))
+				fi->lan_en = true;
+		} else {
+			fi->lan_en = true;
+		}
+	}
+}
+
+/**
+ * ice_ilog2 - Calculates the integer log base 2 of a number
+ * @n: number on which to perform operation
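+ *
+ * Returns the index of the highest set bit, or -1 when @n is 0; for
+ * example, ice_ilog2(8) == 3 and ice_ilog2(7) == 2.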
+ */
+static int ice_ilog2(u64 n)
+{
+	int i;
+
+	for (i = 63; i >= 0; i--)
+		if (((u64)1 << i) & n)
+			return i;
+
+	return -1;
+}
+
+
+/**
+ * ice_fill_sw_rule - Helper function to fill switch rule structure
+ * @hw: pointer to the hardware structure
+ * @f_info: entry containing packet forwarding information
+ * @s_rule: switch rule structure to be filled in based on f_info
+ * @opc: switch rules population command type - pass in the command opcode
+ */
+static void
+ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
+		 struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc)
+{
+	u16 vlan_id = ICE_MAX_VLAN_ID + 1;
+	void *daddr = NULL;
+	u16 eth_hdr_sz;
+	u8 *eth_hdr;
+	u32 act = 0;
+	__be16 *off;
+	u8 q_rgn;
+
+
+	if (opc == ice_aqc_opc_remove_sw_rules) {
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(f_info->fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		return;
+	}
+
+	eth_hdr_sz = sizeof(dummy_eth_header);
+	eth_hdr = s_rule->pdata.lkup_tx_rx.hdr;
+
+	/* initialize the ether header with a dummy header */
+	ice_memcpy(eth_hdr, dummy_eth_header, eth_hdr_sz, ICE_NONDMA_TO_NONDMA);
+	ice_fill_sw_info(hw, f_info);
+
+	switch (f_info->fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (f_info->fwd_id.hw_vsi_id << ICE_SINGLE_ACT_VSI_ID_S) &
+			ICE_SINGLE_ACT_VSI_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_VSI_LIST:
+		act |= ICE_SINGLE_ACT_VSI_LIST;
+		act |= (f_info->fwd_id.vsi_list_id <<
+			ICE_SINGLE_ACT_VSI_LIST_ID_S) &
+			ICE_SINGLE_ACT_VSI_LIST_ID_M;
+		if (f_info->lkup_type != ICE_SW_LKUP_VLAN)
+			act |= ICE_SINGLE_ACT_VSI_FORWARDING |
+				ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+			ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = f_info->qgrp_size > 0 ?
+			(u8)ice_ilog2(f_info->qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (f_info->fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+			ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+			ICE_SINGLE_ACT_Q_REGION_M;
+		break;
+	default:
+		return;
+	}
+
+	if (f_info->lb_en)
+		act |= ICE_SINGLE_ACT_LB_ENABLE;
+	if (f_info->lan_en)
+		act |= ICE_SINGLE_ACT_LAN_ENABLE;
+
+	switch (f_info->lkup_type) {
+	case ICE_SW_LKUP_MAC:
+		daddr = f_info->l_data.mac.mac_addr;
+		break;
+	case ICE_SW_LKUP_VLAN:
+		vlan_id = f_info->l_data.vlan.vlan_id;
+		if (f_info->fltr_act == ICE_FWD_TO_VSI ||
+		    f_info->fltr_act == ICE_FWD_TO_VSI_LIST) {
+			act |= ICE_SINGLE_ACT_PRUNE;
+			act |= ICE_SINGLE_ACT_EGRESS | ICE_SINGLE_ACT_INGRESS;
+		}
+		break;
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+		daddr = f_info->l_data.ethertype_mac.mac_addr;
+		/* fall-through */
+	case ICE_SW_LKUP_ETHERTYPE:
+		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		break;
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		vlan_id = f_info->l_data.mac_vlan.vlan_id;
+		/* fall-through */
+	case ICE_SW_LKUP_PROMISC:
+		daddr = f_info->l_data.mac_vlan.mac_addr;
+		break;
+	default:
+		break;
+	}
+
+	s_rule->type = (f_info->flag & ICE_FLTR_RX) ?
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX) :
+		CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+
+	/* Recipe set depending on lookup type */
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(f_info->lkup_type);
+	s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(f_info->src);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	if (daddr)
+		ice_memcpy(eth_hdr + ICE_ETH_DA_OFFSET, daddr, ETH_ALEN,
+			   ICE_NONDMA_TO_NONDMA);
+
+	if (vlan_id <= ICE_MAX_VLAN_ID) {
+		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		*off = CPU_TO_BE16(vlan_id);
+	}
+
+	/* Create the switch rule with the final dummy Ethernet header */
+	if (opc != ice_aqc_opc_update_sw_rules)
+		s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(eth_hdr_sz);
+}
+
+/**
+ * ice_add_marker_act
+ * @hw: pointer to the hardware structure
+ * @m_ent: the management entry for which sw marker needs to be added
+ * @sw_marker: sw marker to tag the Rx descriptor with
+ * @l_id: large action resource id
+ *
+ * Create a large action to hold software marker and update the switch rule
+ * entry pointed by m_ent with newly created large action
+ */
+static enum ice_status
+ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+		   u16 sw_marker, u16 l_id)
+{
+	struct ice_aqc_sw_rules_elem *lg_act, *rx_tx;
+	/* For software marker we need 3 large actions
+	 * 1. FWD action: FWD TO VSI or VSI LIST
+	 * 2. GENERIC VALUE action to hold the profile id
+	 * 3. GENERIC VALUE action to hold the software marker id
+	 */
+	const u16 num_lg_acts = 3;
+	enum ice_status status;
+	u16 lg_act_size;
+	u16 rules_size;
+	u32 act;
+	u16 id;
+
+	if (m_ent->fltr_info.lkup_type != ICE_SW_LKUP_MAC)
+		return ICE_ERR_PARAM;
+
+	/* Create two back-to-back switch rules and submit them to the HW using
+	 * one memory buffer:
+	 *    1. Large Action
+	 *    2. Look up Tx Rx
+	 */
+	lg_act_size = (u16)ICE_SW_RULE_LG_ACT_SIZE(num_lg_acts);
+	rules_size = lg_act_size + ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
+	if (!lg_act)
+		return ICE_ERR_NO_MEMORY;
+
+	rx_tx = (struct ice_aqc_sw_rules_elem *)((u8 *)lg_act + lg_act_size);
+
+	/* Fill in the first switch rule i.e. large action */
+	lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
+	lg_act->pdata.lg_act.index = CPU_TO_LE16(l_id);
+	lg_act->pdata.lg_act.size = CPU_TO_LE16(num_lg_acts);
+
+	/* First action VSI forwarding or VSI list forwarding depending on how
+	 * many VSIs
+	 */
+	id = (m_ent->vsi_count > 1) ? m_ent->fltr_info.fwd_id.vsi_list_id :
+		m_ent->fltr_info.fwd_id.hw_vsi_id;
+
+	act = ICE_LG_ACT_VSI_FORWARDING | ICE_LG_ACT_VALID_BIT;
+	act |= (id << ICE_LG_ACT_VSI_LIST_ID_S) &
+		ICE_LG_ACT_VSI_LIST_ID_M;
+	if (m_ent->vsi_count > 1)
+		act |= ICE_LG_ACT_VSI_LIST;
+	lg_act->pdata.lg_act.act[0] = CPU_TO_LE32(act);
+
+	/* Second action descriptor type */
+	act = ICE_LG_ACT_GENERIC;
+
+	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+	lg_act->pdata.lg_act.act[1] = CPU_TO_LE32(act);
+
+	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
+	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+
+	/* Third action Marker value */
+	act |= ICE_LG_ACT_GENERIC;
+	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+		ICE_LG_ACT_GENERIC_VALUE_M;
+
+	lg_act->pdata.lg_act.act[2] = CPU_TO_LE32(act);
+
+	/* call the fill switch rule to fill the lookup Tx Rx structure */
+	ice_fill_sw_rule(hw, &m_ent->fltr_info, rx_tx,
+			 ice_aqc_opc_update_sw_rules);
+
+	/* Update the action to point to the large action id */
+	rx_tx->pdata.lkup_tx_rx.act =
+		CPU_TO_LE32(ICE_SINGLE_ACT_PTR |
+			    ((l_id << ICE_SINGLE_ACT_PTR_VAL_S) &
+			     ICE_SINGLE_ACT_PTR_VAL_M));
+
+	/* Use the filter rule id of the previously created rule with single
+	 * act. Once the update happens, hardware will treat this as large
+	 * action
+	 */
+	rx_tx->pdata.lkup_tx_rx.index =
+		CPU_TO_LE16(m_ent->fltr_info.fltr_rule_id);
+
+	status = ice_aq_sw_rules(hw, lg_act, rules_size, 2,
+				 ice_aqc_opc_update_sw_rules, NULL);
+	if (!status) {
+		m_ent->lg_act_idx = l_id;
+		m_ent->sw_marker_id = sw_marker;
+	}
+
+	ice_free(hw, lg_act);
+	return status;
+}
+
+
+/**
+ * ice_create_vsi_list_map
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to set in the VSI mapping
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ *
+ * Helper function to create a new entry of VSI list id to VSI mapping
+ * using the given VSI list id
+ */
+static struct ice_vsi_list_map_info *
+ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			u16 vsi_list_id)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_map;
+	int i;
+
+	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
+		sizeof(*v_map));
+	if (!v_map)
+		return NULL;
+
+	v_map->vsi_list_id = vsi_list_id;
+	v_map->ref_cnt = 1;
+	for (i = 0; i < num_vsi; i++)
+		ice_set_bit(vsi_handle_arr[i], v_map->vsi_map);
+
+	LIST_ADD(&v_map->list_entry, &sw->vsi_list_map_head);
+	return v_map;
+}
+
+/**
+ * ice_update_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @remove: Boolean value to indicate if this is a remove action
+ * @opc: switch rules population command type - pass in the command opcode
+ * @lkup_type: lookup type of the filter
+ *
+ * Call AQ command to add a new switch rule or update existing switch rule
+ * using the given VSI list id
+ */
+static enum ice_status
+ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 vsi_list_id, bool remove, enum ice_adminq_opc opc,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 type;
+	int i;
+
+	if (!num_vsi)
+		return ICE_ERR_PARAM;
+
+	if (lkup_type == ICE_SW_LKUP_MAC ||
+	    lkup_type == ICE_SW_LKUP_MAC_VLAN ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE ||
+	    lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC ||
+	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+	else if (lkup_type == ICE_SW_LKUP_VLAN)
+		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+				ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+	else
+		return ICE_ERR_PARAM;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(num_vsi);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	for (i = 0; i < num_vsi; i++) {
+		if (!ice_is_vsi_valid(hw, vsi_handle_arr[i])) {
+			status = ICE_ERR_PARAM;
+			goto exit;
+		}
+		/* AQ call requires hw_vsi_id(s) */
+		s_rule->pdata.vsi_list.vsi[i] =
+			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
+	}
+
+	s_rule->type = CPU_TO_LE16(type);
+	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opc, NULL);
+
+exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_create_vsi_list_rule - Creates and populates a VSI list rule
+ * @hw: pointer to the hw struct
+ * @vsi_handle_arr: array of VSI handles to form a VSI list
+ * @num_vsi: number of VSI handles in the array
+ * @vsi_list_id: stores the ID of the VSI list to be created
+ * @lkup_type: switch rule filter's lookup type
+ */
+static enum ice_status
+ice_create_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
+			 u16 *vsi_list_id, enum ice_sw_lkup_type lkup_type)
+{
+	enum ice_status status;
+
+	status = ice_aq_alloc_free_vsi_list(hw, vsi_list_id, lkup_type,
+					    ice_aqc_opc_alloc_res);
+	if (status)
+		return status;
+
+	/* Update the newly created VSI list to include the specified VSIs */
+	return ice_update_vsi_list_rule(hw, vsi_handle_arr, num_vsi,
+					*vsi_list_id, false,
+					ice_aqc_opc_add_sw_rules, lkup_type);
+}
+
+/**
+ * ice_create_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: entry containing packet forwarding information
+ *
+ * Create switch rule with given filter information and add an entry
+ * to the corresponding filter management list to track this switch rule
+ * and VSI mapping
+ */
+static enum ice_status
+ice_create_pkt_fwd_rule(struct ice_hw *hw,
+			struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_sw_lkup_type l_type;
+	struct ice_sw_recipe *recp;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	fm_entry = (struct ice_fltr_mgmt_list_entry *)
+		   ice_malloc(hw, sizeof(*fm_entry));
+	if (!fm_entry) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	fm_entry->fltr_info = f_entry->fltr_info;
+
+	/* Initialize all the fields for the management entry */
+	fm_entry->vsi_count = 1;
+	fm_entry->lg_act_idx = ICE_INVAL_LG_ACT_INDEX;
+	fm_entry->sw_marker_id = ICE_INVAL_SW_MARKER_ID;
+	fm_entry->counter_index = ICE_INVAL_COUNTER_ID;
+
+	ice_fill_sw_rule(hw, &fm_entry->fltr_info, s_rule,
+			 ice_aqc_opc_add_sw_rules);
+
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_add_sw_rules, NULL);
+	if (status) {
+		ice_free(hw, fm_entry);
+		goto ice_create_pkt_fwd_rule_exit;
+	}
+
+	f_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	fm_entry->fltr_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+	/* The book keeping entries will get removed when base driver
+	 * calls remove filter AQ command
+	 */
+	l_type = fm_entry->fltr_info.lkup_type;
+	recp = &hw->switch_info->recp_list[l_type];
+	LIST_ADD(&fm_entry->list_entry, &recp->filt_rules);
+
+ice_create_pkt_fwd_rule_exit:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_pkt_fwd_rule
+ * @hw: pointer to the hardware structure
+ * @f_info: filter information for switch rule
+ *
+ * Call AQ command to update a previously created switch rule with a
+ * VSI list id
+ */
+static enum ice_status
+ice_update_pkt_fwd_rule(struct ice_hw *hw, struct ice_fltr_info *f_info)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_malloc(hw, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_sw_rule(hw, f_info, s_rule, ice_aqc_opc_update_sw_rules);
+
+	s_rule->pdata.lkup_tx_rx.index = CPU_TO_LE16(f_info->fltr_rule_id);
+
+	/* Update switch rule with new rule set to forward VSI list */
+	status = ice_aq_sw_rules(hw, s_rule, ICE_SW_RULE_RX_TX_ETH_HDR_SIZE, 1,
+				 ice_aqc_opc_update_sw_rules, NULL);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_update_sw_rule_bridge_mode
+ * @hw: pointer to the hw struct
+ *
+ * Updates unicast switch filter rules based on VEB/VEPA mode
+ */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(fm_entry, rule_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *fi = &fm_entry->fltr_info;
+		u8 *addr = fi->l_data.mac.mac_addr;
+
+		/* Update unicast Tx rules to reflect the selected
+		 * VEB/VEPA mode
+		 */
+		if ((fi->flag & ICE_FLTR_TX) && IS_UNICAST_ETHER_ADDR(addr) &&
+		    (fi->fltr_act == ICE_FWD_TO_VSI ||
+		     fi->fltr_act == ICE_FWD_TO_VSI_LIST ||
+		     fi->fltr_act == ICE_FWD_TO_Q ||
+		     fi->fltr_act == ICE_FWD_TO_QGRP)) {
+			status = ice_update_pkt_fwd_rule(hw, fi);
+			if (status)
+				break;
+		}
+	}
+
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do book keeping associated with adding filter information.
+ * The algorithm to do the book keeping is described below:
+ * When a VSI needs to subscribe to a given filter (MAC/VLAN/Ethtype etc.)
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list id
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
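+ *
+ * Worked example (sketch): a MAC rule currently forwards to VSI A only.
+ * When VSI B subscribes to the same MAC, a two-entry VSI list {A, B} is
+ * allocated, the rule action becomes ICE_FWD_TO_VSI_LIST, and any later
+ * subscriber VSI C is simply appended to that list with an update
+ * switch rule command.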
+ */
+static enum ice_status
+ice_add_update_vsi_list(struct ice_hw *hw,
+			struct ice_fltr_mgmt_list_entry *m_entry,
+			struct ice_fltr_info *cur_fltr,
+			struct ice_fltr_info *new_fltr)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id = 0;
+
+	if ((cur_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_QGRP))
+		return ICE_ERR_NOT_IMPL;
+
+	if ((new_fltr->fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		/* Only one entry existed in the mapping and it was not already
+		 * a part of a VSI list. So, create a VSI list with the old and
+		 * new VSIs.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->fwd_id.hw_vsi_id == new_fltr->fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->vsi_handle;
+		vsi_handle_arr[1] = new_fltr->vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  new_fltr->lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr = *new_fltr;
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+
+		/* If this entry was large action then the large action needs
+		 * to be updated to point to FWD to VSI list
+		 */
+		if (m_entry->sw_marker_id != ICE_INVAL_SW_MARKER_ID)
+			status =
+			    ice_add_marker_act(hw, m_entry,
+					       m_entry->sw_marker_id,
+					       m_entry->lg_act_idx);
+	} else {
+		u16 vsi_handle = new_fltr->vsi_handle;
+		enum ice_adminq_opc opcode;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI id passed in
+		 */
+		vsi_list_id = cur_fltr->fwd_id.vsi_list_id;
+		opcode = ice_aqc_opc_update_sw_rules;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false, opcode,
+						  new_fltr->lkup_type);
+		/* update VSI list mapping info with new VSI id */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
+
+/**
+ * ice_find_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a given rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_rule_entry(struct ice_hw *hw, u8 recp_id, struct ice_fltr_info *f_info)
+{
+	struct ice_fltr_mgmt_list_entry *list_itr, *ret = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->flag == list_itr->fltr_info.flag) {
+			ret = list_itr;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which VSI lists needs to be searched
+ * @vsi_handle: VSI handle to be found in VSI list
+ * @vsi_list_id: VSI list id found containing vsi_handle
+ *
+ * Helper function to search a VSI list with single entry containing given VSI
+ * handle element. This can be extended further to search VSI list with more
+ * than 1 vsi_count. Returns pointer to VSI list entry if found.
+ */
+static struct ice_vsi_list_map_info *
+ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+			u16 *vsi_list_id)
+{
+	struct ice_vsi_list_map_info *map_info = NULL;
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
+			map_info = list_itr->vsi_list_info;
+			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
+				*vsi_list_id = map_info->vsi_list_id;
+				return map_info;
+			}
+		}
+	}
+	return NULL;
+}
+
+/**
+ * ice_add_rule_internal - add rule for a given lookup type
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type (recipe id) for which rule has to be added
+ * @f_entry: structure containing MAC forwarding information
+ *
+ * Adds or updates the rule lists for a given recipe
+ */
+static enum ice_status
+ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
+		      struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	struct ice_fltr_mgmt_list_entry *m_entry;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Load the hw_vsi_id only if the fwd action is fwd to VSI */
+	if (f_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI)
+		f_entry->fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+
+	ice_acquire_lock(rule_lock);
+	new_fltr = &f_entry->fltr_info;
+	if (new_fltr->flag & ICE_FLTR_RX)
+		new_fltr->src = hw->port_info->lport;
+	else if (new_fltr->flag & ICE_FLTR_TX)
+		new_fltr->src =
+			ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
+	if (!m_entry) {
+		ice_release_lock(rule_lock);
+		return ice_create_pkt_fwd_rule(hw, f_entry);
+	}
+
+	cur_fltr = &m_entry->fltr_info;
+	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
+	ice_release_lock(rule_lock);
+
+	return status;
+}
+
+/**
+ * ice_remove_vsi_list_rule
+ * @hw: pointer to the hardware structure
+ * @vsi_list_id: VSI list id generated as part of allocate resource
+ * @lkup_type: switch rule filter lookup type
+ *
+ * The VSI list should be emptied before this function is called to remove the
+ * VSI list.
+ */
+static enum ice_status
+ice_remove_vsi_list_rule(struct ice_hw *hw, u16 vsi_list_id,
+			 enum ice_sw_lkup_type lkup_type)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	enum ice_status status;
+	u16 s_rule_size;
+
+	s_rule_size = (u16)ICE_SW_RULE_VSI_LIST_SIZE(0);
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR);
+	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);
+
+	/* Free the vsi_list resource that we allocated. It is assumed that the
+	 * list is empty at this point.
+	 */
+	status = ice_aq_alloc_free_vsi_list(hw, &vsi_list_id, lkup_type,
+					    ice_aqc_opc_free_res);
+
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			struct ice_fltr_mgmt_list_entry *fm_list)
+{
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status = ICE_SUCCESS;
+	u16 vsi_list_id;
+
+	if (fm_list->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = fm_list->fltr_info.lkup_type;
+	vsi_list_id = fm_list->fltr_info.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+
+	if (fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) {
+		struct ice_fltr_info tmp_fltr_info = fm_list->fltr_info;
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+
+		tmp_fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr_info.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		tmp_fltr_info.vsi_handle = rem_vsi_handle;
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+
+		fm_list->fltr_info = tmp_fltr_info;
+	}
+
+	if ((fm_list->vsi_count == 1 && lkup_type != ICE_SW_LKUP_VLAN) ||
+	    (fm_list->vsi_count == 0 && lkup_type == ICE_SW_LKUP_VLAN)) {
+		struct ice_vsi_list_map_info *vsi_list_info =
+			fm_list->vsi_list_info;
+
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_remove_rule_internal - Remove a filter rule of a given type
+ *
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe id for which the rule needs to be removed
+ * @f_entry: rule entry containing filter information
+ */
+static enum ice_status
+ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+			 struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_elem;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	u16 vsi_handle;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+
+	rule_lock = &sw->recp_list[recp_id].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	list_elem = ice_find_rule_entry(hw, recp_id, &f_entry->fltr_info);
+	if (!list_elem) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	}
+
+	if (list_elem->fltr_info.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (!list_elem->vsi_list_info) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto exit;
+	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
+		/* a ref_cnt > 1 indicates that the vsi_list is being
+		 * shared by multiple rules. Decrement the ref_cnt and
+		 * remove this rule, but do not modify the list, as it
+		 * is in-use by other rules.
+		 */
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = true;
+	} else {
+		/* a ref_cnt of 1 indicates the vsi_list is only used
+		 * by one rule. However, the original removal request is only
+		 * for a single VSI. Update the vsi_list first, and only
+		 * remove the rule if there are no further VSIs in this list.
+		 */
+		vsi_handle = f_entry->fltr_info.vsi_handle;
+		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status)
+			goto exit;
+		/* if vsi count goes to zero after updating the vsi list */
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+
+	if (remove_rule) {
+		/* Remove the lookup rule */
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		s_rule = (struct ice_aqc_sw_rules_elem *)
+			ice_malloc(hw, ICE_SW_RULE_RX_TX_NO_HDR_SIZE);
+		if (!s_rule) {
+			status = ICE_ERR_NO_MEMORY;
+			goto exit;
+		}
+
+		ice_fill_sw_rule(hw, &list_elem->fltr_info, s_rule,
+				 ice_aqc_opc_remove_sw_rules);
+
+		status = ice_aq_sw_rules(hw, s_rule,
+					 ICE_SW_RULE_RX_TX_NO_HDR_SIZE, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status)
+			goto exit;
+
+		/* Remove the bookkeeping entry from the list */
+		ice_free(hw, s_rule);
+
+		LIST_DEL(&list_elem->list_entry);
+		ice_free(hw, list_elem);
+	}
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+
+/**
+ * ice_add_mac - Add a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * IMPORTANT: When the ucast_shared flag is set to false and m_list has
+ * multiple unicast addresses, the function assumes that all the
+ * addresses are unique in a given add_mac call. It doesn't
+ * check for duplicates in this case; removing duplicates from a given
+ * list should be taken care of by the caller of this function.
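+ *
+ * Minimal caller sketch (m_list is assumed to be an initialized
+ * LIST_HEAD_TYPE; the field values are illustrative):
+ *
+ *	struct ice_fltr_list_entry f = { 0 };
+ *
+ *	f.fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+ *	f.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	f.fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	f.fltr_info.vsi_handle = vsi_handle;
+ *	ice_memcpy(f.fltr_info.l_data.mac.mac_addr, addr, ETH_ALEN,
+ *		   ICE_NONDMA_TO_NONDMA);
+ *	LIST_ADD(&f.list_entry, &m_list);
+ *	status = ice_add_mac(hw, &m_list);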
+ */
+enum ice_status
+ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_aqc_sw_rules_elem *s_rule, *r_iter;
+	struct ice_fltr_list_entry *m_list_itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	u16 elem_sent, total_elem_left;
+	struct ice_switch_info *sw;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u16 num_unicast = 0;
+	u16 s_rule_size;
+
+	if (!m_list || !hw)
+		return ICE_ERR_PARAM;
+	s_rule = NULL;
+	sw = hw->switch_info;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		u8 *add = &m_list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
+		u16 hw_vsi_id;
+
+		m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		vsi_handle = m_list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+		hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+		m_list_itr->fltr_info.fwd_id.hw_vsi_id = hw_vsi_id;
+		/* update the src in case it is vsi num */
+		if (m_list_itr->fltr_info.src_id != ICE_SRC_ID_VSI)
+			return ICE_ERR_PARAM;
+		m_list_itr->fltr_info.src = hw_vsi_id;
+		if (m_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_MAC ||
+		    IS_ZERO_ETHER_ADDR(add))
+			return ICE_ERR_PARAM;
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't overwrite the unicast address */
+			ice_acquire_lock(rule_lock);
+			if (ice_find_rule_entry(hw, ICE_SW_LKUP_MAC,
+						&m_list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_ALREADY_EXISTS;
+			}
+			ice_release_lock(rule_lock);
+			num_unicast++;
+		} else if (IS_MULTICAST_ETHER_ADDR(add) ||
+			   (IS_UNICAST_ETHER_ADDR(add) && hw->ucast_shared)) {
+			m_list_itr->status =
+				ice_add_rule_internal(hw, ICE_SW_LKUP_MAC,
+						      m_list_itr);
+			if (m_list_itr->status)
+				return m_list_itr->status;
+		}
+	}
+
+	ice_acquire_lock(rule_lock);
+	/* Exit if no suitable entries were found for adding bulk switch rule */
+	if (!num_unicast) {
+		status = ICE_SUCCESS;
+		goto ice_add_mac_exit;
+	}
+
+	rule_head = &sw->recp_list[ICE_SW_LKUP_MAC].filt_rules;
+
+	/* Allocate switch rule buffer for the bulk update for unicast */
+	s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)
+		ice_calloc(hw, num_unicast, s_rule_size);
+	if (!s_rule) {
+		status = ICE_ERR_NO_MEMORY;
+		goto ice_add_mac_exit;
+	}
+
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			ice_fill_sw_rule(hw, &m_list_itr->fltr_info, r_iter,
+					 ice_aqc_opc_add_sw_rules);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+	/* Call AQ bulk switch rule update for all unicast addresses */
+	r_iter = s_rule;
+	/* Call AQ switch rule in AQ_MAX chunk */
+	for (total_elem_left = num_unicast; total_elem_left > 0;
+	     total_elem_left -= elem_sent) {
+		struct ice_aqc_sw_rules_elem *entry = r_iter;
+
+		elem_sent = min(total_elem_left,
+				(u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size));
+		status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size,
+					 elem_sent, ice_aqc_opc_add_sw_rules,
+					 NULL);
+		if (status)
+			goto ice_add_mac_exit;
+		r_iter = (struct ice_aqc_sw_rules_elem *)
+			((u8 *)r_iter + (elem_sent * s_rule_size));
+	}
+
+	/* Fill up rule id based on the value returned from FW */
+	r_iter = s_rule;
+	LIST_FOR_EACH_ENTRY(m_list_itr, m_list, ice_fltr_list_entry,
+			    list_entry) {
+		struct ice_fltr_info *f_info = &m_list_itr->fltr_info;
+		u8 *mac_addr = &f_info->l_data.mac.mac_addr[0];
+		struct ice_fltr_mgmt_list_entry *fm_entry;
+
+		if (IS_UNICAST_ETHER_ADDR(mac_addr)) {
+			f_info->fltr_rule_id =
+				LE16_TO_CPU(r_iter->pdata.lkup_tx_rx.index);
+			f_info->fltr_act = ICE_FWD_TO_VSI;
+			/* Create an entry to track this MAC address */
+			fm_entry = (struct ice_fltr_mgmt_list_entry *)
+				ice_malloc(hw, sizeof(*fm_entry));
+			if (!fm_entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto ice_add_mac_exit;
+			}
+			fm_entry->fltr_info = *f_info;
+			fm_entry->vsi_count = 1;
+			/* The book keeping entries will get removed when
+			 * base driver calls remove filter AQ command
+			 */
+
+			LIST_ADD(&fm_entry->list_entry, rule_head);
+			r_iter = (struct ice_aqc_sw_rules_elem *)
+				((u8 *)r_iter + s_rule_size);
+		}
+	}
+
+ice_add_mac_exit:
+	ice_release_lock(rule_lock);
+	if (s_rule)
+		ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_add_vlan_internal - Add one VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @f_entry: filter entry containing one VLAN information
+ */
+static enum ice_status
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *v_list_itr;
+	struct ice_fltr_info *new_fltr, *cur_fltr;
+	enum ice_sw_lkup_type lkup_type;
+	u16 vsi_list_id = 0, vsi_handle;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!ice_is_vsi_valid(hw, f_entry->fltr_info.vsi_handle))
+		return ICE_ERR_PARAM;
+
+	f_entry->fltr_info.fwd_id.hw_vsi_id =
+		ice_get_hw_vsi_num(hw, f_entry->fltr_info.vsi_handle);
+	new_fltr = &f_entry->fltr_info;
+
+	/* VLAN id should only be 12 bits */
+	if (new_fltr->l_data.vlan.vlan_id > ICE_MAX_VLAN_ID)
+		return ICE_ERR_PARAM;
+
+	if (new_fltr->src_id != ICE_SRC_ID_VSI)
+		return ICE_ERR_PARAM;
+
+	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
+	lkup_type = new_fltr->lkup_type;
+	vsi_handle = new_fltr->vsi_handle;
+	rule_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	ice_acquire_lock(rule_lock);
+	v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN, new_fltr);
+	if (!v_list_itr) {
+		struct ice_vsi_list_map_info *map_info = NULL;
+
+		if (new_fltr->fltr_act == ICE_FWD_TO_VSI) {
+			/* All VLAN pruning rules use a VSI list. Check if
+			 * there is already a VSI list containing VSI that we
+			 * want to add. If found, use the same vsi_list_id for
+			 * this new VLAN rule or else create a new list.
+			 */
+			map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+							   vsi_handle,
+							   &vsi_list_id);
+			if (!map_info) {
+				status = ice_create_vsi_list_rule(hw,
+								  &vsi_handle,
+								  1,
+								  &vsi_list_id,
+								  lkup_type);
+				if (status)
+					goto exit;
+			}
+			/* Convert the action to forwarding to a VSI list. */
+			new_fltr->fltr_act = ICE_FWD_TO_VSI_LIST;
+			new_fltr->fwd_id.vsi_list_id = vsi_list_id;
+		}
+
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		if (!status) {
+			v_list_itr = ice_find_rule_entry(hw, ICE_SW_LKUP_VLAN,
+							 new_fltr);
+			if (!v_list_itr) {
+				status = ICE_ERR_DOES_NOT_EXIST;
+				goto exit;
+			}
+			/* reuse VSI list for new rule and increment ref_cnt */
+			if (map_info) {
+				v_list_itr->vsi_list_info = map_info;
+				map_info->ref_cnt++;
+			} else {
+				v_list_itr->vsi_list_info =
+					ice_create_vsi_list_map(hw, &vsi_handle,
+								1, vsi_list_id);
+			}
+		}
+	} else if (v_list_itr->vsi_list_info->ref_cnt == 1) {
+		/* Update the existing VSI list to add the new VSI id only if
+		 * it is used by exactly one VLAN rule.
+		 */
+		cur_fltr = &v_list_itr->fltr_info;
+		status = ice_add_update_vsi_list(hw, v_list_itr, cur_fltr,
+						 new_fltr);
+	} else {
+		/* If the VLAN rule exists and the VSI list used by this rule
+		 * is referenced by more than one VLAN rule, then create a new
+		 * VSI list that appends the new VSI to the previous one and
+		 * update the existing VLAN rule to point to the new VSI list
+		 * id.
+		 */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+		u16 cur_handle;
+
+		/* The current implementation only supports reusing a VSI list
+		 * with a VSI count of one. We should never hit the condition
+		 * below.
+		 */
+		if (v_list_itr->vsi_count > 1 &&
+		    v_list_itr->vsi_list_info->ref_cnt > 1) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			status = ICE_ERR_CFG;
+			goto exit;
+		}
+
+		cur_handle =
+			ice_find_first_bit(v_list_itr->vsi_list_info->vsi_map,
+					   ICE_MAX_VSI);
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_handle == vsi_handle) {
+			status = ICE_ERR_ALREADY_EXISTS;
+			goto exit;
+		}
+
+		vsi_handle_arr[0] = cur_handle;
+		vsi_handle_arr[1] = vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id, lkup_type);
+		if (status)
+			goto exit;
+
+		tmp_fltr = v_list_itr->fltr_info;
+		tmp_fltr.fltr_rule_id = v_list_itr->fltr_info.fltr_rule_id;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		/* Update the previous switch rule to a new VSI list which
+		 * includes current VSI that is requested
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto exit;
+
+		/* before overriding VSI list map info. decrement ref_cnt of
+		 * previous VSI list
+		 */
+		v_list_itr->vsi_list_info->ref_cnt--;
+
+		/* now update to newly created list */
+		v_list_itr->fltr_info.fwd_id.vsi_list_id = vsi_list_id;
+		v_list_itr->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+		v_list_itr->vsi_count++;
+	}
+
+exit:
+	ice_release_lock(rule_lock);
+	return status;
+}
+
+/**
+ * ice_add_vlan - Add VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
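+ *
+ * Each entry must use ICE_SW_LKUP_VLAN with a 12-bit VLAN id; minimal
+ * sketch for one entry v (an ice_fltr_list_entry already linked into
+ * v_list):
+ *
+ *	v.fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+ *	v.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ *	v.fltr_info.src_id = ICE_SRC_ID_VSI;
+ *	v.fltr_info.vsi_handle = vsi_handle;
+ *	v.fltr_info.l_data.vlan.vlan_id = vlan_id;
+ *	status = ice_add_vlan(hw, &v_list);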
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
+			    list_entry) {
+		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
+ * @hw: pointer to the hardware structure
+ * @mv_list: list of MAC and VLAN filters
+ *
+ * If the VSI on which the MAC-VLAN pair has to be added has Rx and Tx VLAN
+ * pruning bits enabled, then it is the responsibility of the caller to also
+ * add a VLAN-only filter on the same VSI; otherwise, packets belonging to
+ * that VLAN won't be received on that VSI.
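+ *
+ * For example, when adding a {MAC, VLAN 10} pair on such a VSI, a separate
+ * ICE_SW_LKUP_VLAN filter for VLAN 10 should also be added on that VSI.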
+ */
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
+{
+	struct ice_fltr_list_entry *mv_list_itr;
+
+	if (!mv_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(mv_list_itr, mv_list, ice_fltr_list_entry,
+			    list_entry) {
+		enum ice_sw_lkup_type l_type =
+			mv_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		mv_list_itr->fltr_info.flag = ICE_FLTR_TX;
+		mv_list_itr->status =
+			ice_add_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+					      mv_list_itr);
+		if (mv_list_itr->status)
+			return mv_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif
+
+/**
+ * ice_rem_sw_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	if (!LIST_EMPTY(rule_head)) {
+		struct ice_fltr_mgmt_list_entry *entry;
+		struct ice_fltr_mgmt_list_entry *tmp;
+
+		LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, rule_head,
+					 ice_fltr_mgmt_list_entry, list_entry) {
+			LIST_DEL(&entry->list_entry);
+			ice_free(hw, entry);
+		}
+	}
+}
+
+/**
+ * ice_cfg_dflt_vsi - change state of VSI to set/clear default
+ * @pi: pointer to the port_info structure
+ * @vsi_handle: VSI handle to set as default
+ * @set: true to add the above mentioned switch rule, false to remove it
+ * @direction: ICE_FLTR_RX or ICE_FLTR_TX
+ *
+ * Add a filter rule to set/unset the given VSI as the default VSI for the
+ * switch (represented by the SWID).
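+ *
+ * For example, ice_cfg_dflt_vsi(pi, vsi_handle, true, ICE_FLTR_RX) installs
+ * a rule that forwards otherwise unmatched Rx traffic to the given VSI.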
+ */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction)
+{
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct ice_fltr_info f_info;
+	struct ice_hw *hw = pi->hw;
+	enum ice_adminq_opc opcode;
+	enum ice_status status;
+	u16 s_rule_size;
+	u16 hw_vsi_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	s_rule_size = set ? ICE_SW_RULE_RX_TX_ETH_HDR_SIZE :
+			    ICE_SW_RULE_RX_TX_NO_HDR_SIZE;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, s_rule_size);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_memset(&f_info, 0, sizeof(f_info), ICE_NONDMA_MEM);
+
+	f_info.lkup_type = ICE_SW_LKUP_DFLT;
+	f_info.flag = direction;
+	f_info.fltr_act = ICE_FWD_TO_VSI;
+	f_info.fwd_id.hw_vsi_id = hw_vsi_id;
+
+	if (f_info.flag & ICE_FLTR_RX) {
+		f_info.src = pi->lport;
+		f_info.src_id = ICE_SRC_ID_LPORT;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_rx_vsi_rule_id;
+	} else if (f_info.flag & ICE_FLTR_TX) {
+		f_info.src_id = ICE_SRC_ID_VSI;
+		f_info.src = hw_vsi_id;
+		if (!set)
+			f_info.fltr_rule_id =
+				pi->dflt_tx_vsi_rule_id;
+	}
+
+	if (set)
+		opcode = ice_aqc_opc_add_sw_rules;
+	else
+		opcode = ice_aqc_opc_remove_sw_rules;
+
+	ice_fill_sw_rule(hw, &f_info, s_rule, opcode);
+
+	status = ice_aq_sw_rules(hw, s_rule, s_rule_size, 1, opcode, NULL);
+	if (status || !(f_info.flag & ICE_FLTR_TX_RX))
+		goto out;
+	if (set) {
+		u16 index = LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = hw_vsi_id;
+			pi->dflt_tx_vsi_rule_id = index;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = hw_vsi_id;
+			pi->dflt_rx_vsi_rule_id = index;
+		}
+	} else {
+		if (f_info.flag & ICE_FLTR_TX) {
+			pi->dflt_tx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_tx_vsi_rule_id = ICE_INVAL_ACT;
+		} else if (f_info.flag & ICE_FLTR_RX) {
+			pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
+			pi->dflt_rx_vsi_rule_id = ICE_INVAL_ACT;
+		}
+	}
+
+out:
+	ice_free(hw, s_rule);
+	return status;
+}
+
+/**
+ * ice_remove_mac - remove a MAC address based filter rule
+ * @hw: pointer to the hardware structure
+ * @m_list: list of MAC addresses and forwarding information
+ *
+ * This function removes either a MAC filter rule or a specific VSI from a
+ * VSI list for a multicast MAC address.
+ *
+ * Returns ICE_ERR_DOES_NOT_EXIST if a given entry was not added by
+ * ice_add_mac. Caller should be aware that this call will only work if all
+ * the entries passed into m_list were added previously. It will not attempt to
+ * do a partial remove of entries that were found.
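+ * In other words, the function returns on the first entry that cannot be
+ * removed; entries already removed at that point are not restored.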
+ */
+enum ice_status
+ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
+{
+	struct ice_fltr_list_entry *list_itr, *tmp;
+
+	if (!m_list)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC)
+			return ICE_ERR_PARAM;
+		list_itr->status = ice_remove_rule_internal(hw,
+							    ICE_SW_LKUP_MAC,
+							    list_itr);
+		if (list_itr->status)
+			return list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_remove_vlan - Remove VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status = ice_remove_rule_internal(hw,
+							      ICE_SW_LKUP_VLAN,
+							      v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+#ifndef NO_MACVLAN_SUPPORT
+/**
+ * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of MAC VLAN entries and forwarding information
+ */
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
+
+		if (l_type != ICE_SW_LKUP_MAC_VLAN)
+			return ICE_ERR_PARAM;
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, ICE_SW_LKUP_MAC_VLAN,
+						 v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+#endif /* !NO_MACVLAN_SUPPORT */
+
+/**
+ * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
+ * @fm_entry: filter entry to inspect
+ * @vsi_handle: VSI handle to compare with filter info
+ */
+static bool
+ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
+{
+	return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
+		 fm_entry->fltr_info.vsi_handle == vsi_handle) ||
+		(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
+		 (ice_is_bit_set(fm_entry->vsi_list_info->vsi_map,
+				 vsi_handle))));
+}
+
+/**
+ * ice_add_entry_to_vsi_fltr_list - Add copy of fltr_list_entry to remove list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @vsi_list_head: pointer to the list to add entry to
+ * @fi: pointer to fltr_info of filter entry to copy & add
+ *
+ * Helper function, used when creating a list of filters to remove from
+ * a specific VSI. The entry added to vsi_list_head is a COPY of the
+ * original filter entry, with the exception of the fltr_info.fltr_act and
+ * fltr_info.fwd_id fields. These are set such that later logic can
+ * extract which VSI to remove the filter from, and pass on that information.
+ */
+static enum ice_status
+ice_add_entry_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			       struct LIST_HEAD_TYPE *vsi_list_head,
+			       struct ice_fltr_info *fi)
+{
+	struct ice_fltr_list_entry *tmp;
+
+	/* this memory is freed up in the caller function
+	 * once filters for this VSI are removed
+	 */
+	tmp = (struct ice_fltr_list_entry *)ice_malloc(hw, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp->fltr_info = *fi;
+
+	/* Overwrite these fields to indicate which VSI to remove filter from,
+	 * so find and remove logic can extract the information from the
+	 * list entries. Note that original entries will still have proper
+	 * values.
+	 */
+	tmp->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	tmp->fltr_info.vsi_handle = vsi_handle;
+	tmp->fltr_info.fwd_id.hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_ADD(&tmp->list_entry, vsi_list_head);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_add_to_vsi_fltr_list - Add VSI filters to the list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup_list_head: pointer to the list that has certain lookup type filters
+ * @vsi_list_head: pointer to the list pertaining to VSI with vsi_handle
+ *
+ * Locates all filters in lkup_list_head that are used by the given VSI,
+ * and adds COPIES of those entries to vsi_list_head (intended to be used
+ * to remove the listed filters).
+ * Note that this means all entries in vsi_list_head must be explicitly
+ * deallocated by the caller when done with the list.
+ */
+static enum ice_status
+ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
+			 struct LIST_HEAD_TYPE *lkup_list_head,
+			 struct LIST_HEAD_TYPE *vsi_list_head)
+{
+	struct ice_fltr_mgmt_list_entry *fm_entry;
+	enum ice_status status = ICE_SUCCESS;
+
+	/* check to make sure VSI ID is valid and within boundary */
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	LIST_FOR_EACH_ENTRY(fm_entry, lkup_list_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		struct ice_fltr_info *fi;
+
+		fi = &fm_entry->fltr_info;
+		if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							vsi_list_head, fi);
+		if (status)
+			return status;
+	}
+	return status;
+}
+
+
+/**
+ * ice_determine_promisc_mask
+ * @fi: filter info to parse
+ *
+ * Helper function to determine which ICE_PROMISC_ mask corresponds
+ * to the given filter info.
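+ *
+ * For example, a Tx filter with a broadcast MAC address and a non-zero
+ * VLAN ID yields ICE_PROMISC_BCAST_TX | ICE_PROMISC_VLAN_TX.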
+ */
+static u8 ice_determine_promisc_mask(struct ice_fltr_info *fi)
+{
+	u16 vid = fi->l_data.mac_vlan.vlan_id;
+	u8 *macaddr = fi->l_data.mac.mac_addr;
+	bool is_tx_fltr = false;
+	u8 promisc_mask = 0;
+
+	if (fi->flag == ICE_FLTR_TX)
+		is_tx_fltr = true;
+
+	if (IS_BROADCAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_BCAST_TX : ICE_PROMISC_BCAST_RX;
+	else if (IS_MULTICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_MCAST_TX : ICE_PROMISC_MCAST_RX;
+	else if (IS_UNICAST_ETHER_ADDR(macaddr))
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_UCAST_TX : ICE_PROMISC_UCAST_RX;
+	if (vid)
+		promisc_mask |= is_tx_fltr ?
+			ICE_PROMISC_VLAN_TX : ICE_PROMISC_VLAN_RX;
+
+	return promisc_mask;
+}
+
+
+/**
+ * ice_remove_promisc - Remove promisc based filter rules
+ * @hw: pointer to the hardware structure
+ * @recp_id: recipe ID for which the rule needs to be removed
+ * @v_list: list of promisc entries
+ */
+static enum ice_status
+ice_remove_promisc(struct ice_hw *hw, u8 recp_id,
+		   struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_fltr_list_entry *v_list_itr, *tmp;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
+				 list_entry) {
+		v_list_itr->status =
+			ice_remove_rule_internal(hw, recp_id, v_list_itr);
+		if (v_list_itr->status)
+			return v_list_itr->status;
+	}
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_clear_vsi_promisc - clear specified promiscuous mode(s) for given VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to clear mode
+ * @promisc_mask: mask of promiscuous config bits to clear
+ * @vid: VLAN ID to clear VLAN promiscuous
+ */
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry, *tmp;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct ice_fltr_mgmt_list_entry *itr;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status = ICE_SUCCESS;
+	u8 recipe_id;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (vid)
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	else
+		recipe_id = ICE_SW_LKUP_PROMISC;
+
+	rule_head = &sw->recp_list[recipe_id].filt_rules;
+	rule_lock = &sw->recp_list[recipe_id].filt_rule_lock;
+
+	INIT_LIST_HEAD(&remove_list_head);
+
+	ice_acquire_lock(rule_lock);
+	LIST_FOR_EACH_ENTRY(itr, rule_head,
+			    ice_fltr_mgmt_list_entry, list_entry) {
+		u8 fltr_promisc_mask = 0;
+
+		if (!ice_vsi_uses_fltr(itr, vsi_handle))
+			continue;
+
+		fltr_promisc_mask |=
+			ice_determine_promisc_mask(&itr->fltr_info);
+
+		/* Skip if filter is not completely specified by given mask */
+		if (fltr_promisc_mask & ~promisc_mask)
+			continue;
+
+		status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
+							&remove_list_head,
+							&itr->fltr_info);
+		if (status) {
+			ice_release_lock(rule_lock);
+			goto free_fltr_list;
+		}
+	}
+	ice_release_lock(rule_lock);
+
+	status = ice_remove_promisc(hw, recipe_id, &remove_list_head);
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+
+	return status;
+}
+
+/**
+ * ice_set_vsi_promisc - set given VSI to given promiscuous mode(s)
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @vid: VLAN ID to set VLAN promiscuous
+ */
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask, u16 vid)
+{
+	enum { UCAST_FLTR = 1, MCAST_FLTR, BCAST_FLTR };
+	struct ice_fltr_list_entry f_list_entry;
+	struct ice_fltr_info new_fltr;
+	enum ice_status status = ICE_SUCCESS;
+	bool is_tx_fltr;
+	u16 hw_vsi_id;
+	int pkt_type;
+	u8 recipe_id;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_vsi_promisc\n");
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	ice_memset(&new_fltr, 0, sizeof(new_fltr), ICE_NONDMA_MEM);
+
+	if (promisc_mask & (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX)) {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC_VLAN;
+		new_fltr.l_data.mac_vlan.vlan_id = vid;
+		recipe_id = ICE_SW_LKUP_PROMISC_VLAN;
+	} else {
+		new_fltr.lkup_type = ICE_SW_LKUP_PROMISC;
+		recipe_id = ICE_SW_LKUP_PROMISC;
+	}
+
+	/* Separate filters must be set for each direction/packet type
+	 * combination, so we will loop over the mask value, store the
+	 * individual type, and clear it out in the input mask as it
+	 * is found.
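+	 * For example, promisc_mask = ICE_PROMISC_UCAST_RX |
+	 * ICE_PROMISC_MCAST_RX results in two iterations, each
+	 * installing one Rx filter rule.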
+	 */
+	while (promisc_mask) {
+		u8 *mac_addr;
+
+		pkt_type = 0;
+		is_tx_fltr = false;
+
+		if (promisc_mask & ICE_PROMISC_UCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_RX;
+			pkt_type = UCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_UCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_UCAST_TX;
+			pkt_type = UCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_RX;
+			pkt_type = MCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_MCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_MCAST_TX;
+			pkt_type = MCAST_FLTR;
+			is_tx_fltr = true;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_RX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_RX;
+			pkt_type = BCAST_FLTR;
+		} else if (promisc_mask & ICE_PROMISC_BCAST_TX) {
+			promisc_mask &= ~ICE_PROMISC_BCAST_TX;
+			pkt_type = BCAST_FLTR;
+			is_tx_fltr = true;
+		}
+
+		/* Check for VLAN promiscuous flag */
+		if (promisc_mask & ICE_PROMISC_VLAN_RX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_RX;
+		} else if (promisc_mask & ICE_PROMISC_VLAN_TX) {
+			promisc_mask &= ~ICE_PROMISC_VLAN_TX;
+			is_tx_fltr = true;
+		}
+
+		/* Set filter DA based on packet type */
+		mac_addr = new_fltr.l_data.mac.mac_addr;
+		if (pkt_type == BCAST_FLTR) {
+			ice_memset(mac_addr, 0xff, ETH_ALEN, ICE_NONDMA_MEM);
+		} else if (pkt_type == MCAST_FLTR ||
+			   pkt_type == UCAST_FLTR) {
+			/* Use the dummy ether header DA */
+			ice_memcpy(mac_addr, dummy_eth_header, ETH_ALEN,
+				   ICE_NONDMA_TO_NONDMA);
+			if (pkt_type == MCAST_FLTR)
+				mac_addr[0] |= 0x1;	/* Set multicast bit */
+		}
+
+		/* Need to reset this to zero for all iterations */
+		new_fltr.flag = 0;
+		if (is_tx_fltr) {
+			new_fltr.flag |= ICE_FLTR_TX;
+			new_fltr.src = hw_vsi_id;
+		} else {
+			new_fltr.flag |= ICE_FLTR_RX;
+			new_fltr.src = hw->port_info->lport;
+		}
+
+		new_fltr.fltr_act = ICE_FWD_TO_VSI;
+		new_fltr.vsi_handle = vsi_handle;
+		new_fltr.fwd_id.hw_vsi_id = hw_vsi_id;
+		f_list_entry.fltr_info = new_fltr;
+
+		status = ice_add_rule_internal(hw, recipe_id, &f_list_entry);
+		if (status != ICE_SUCCESS)
+			goto set_promisc_exit;
+	}
+
+set_promisc_exit:
+	return status;
+}
+
+/**
+ * ice_set_vlan_vsi_promisc
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to configure
+ * @promisc_mask: mask of promiscuous config bits
+ * @rm_vlan_promisc: Clear VLANs VSI promisc mode
+ *
+ * Configure VSI with all associated VLANs to given promiscuous mode(s)
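+ *
+ * Each VLAN found on the VSI is configured individually via
+ * ice_set_vsi_promisc (or ice_clear_vsi_promisc when rm_vlan_promisc is
+ * true).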
+ */
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct LIST_HEAD_TYPE vsi_list_head;
+	struct LIST_HEAD_TYPE *vlan_head;
+	struct ice_lock *vlan_lock; /* Lock to protect filter rule list */
+	enum ice_status status;
+	u16 vlan_id;
+
+	INIT_LIST_HEAD(&vsi_list_head);
+	vlan_lock = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rule_lock;
+	vlan_head = &sw->recp_list[ICE_SW_LKUP_VLAN].filt_rules;
+	ice_acquire_lock(vlan_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, vlan_head,
+					  &vsi_list_head);
+	ice_release_lock(vlan_lock);
+	if (status)
+		goto free_fltr_list;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &vsi_list_head, ice_fltr_list_entry,
+			    list_entry) {
+		vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+		if (rm_vlan_promisc)
+			status = ice_clear_vsi_promisc(hw, vsi_handle,
+						       promisc_mask, vlan_id);
+		else
+			status = ice_set_vsi_promisc(hw, vsi_handle,
+						     promisc_mask, vlan_id);
+		if (status)
+			break;
+	}
+
+free_fltr_list:
+	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, &vsi_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&list_itr->list_entry);
+		ice_free(hw, list_itr);
+	}
+	return status;
+}
+
+/**
+ * ice_remove_vsi_lkup_fltr - Remove lookup type filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ * @lkup: switch rule filter lookup type
+ */
+static void
+ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
+			 enum ice_sw_lkup_type lkup)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_list_entry *fm_entry;
+	struct LIST_HEAD_TYPE remove_list_head;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_fltr_list_entry *tmp;
+	struct ice_lock *rule_lock;	/* Lock to protect filter rule list */
+	enum ice_status status;
+
+	INIT_LIST_HEAD(&remove_list_head);
+	rule_lock = &sw->recp_list[lkup].filt_rule_lock;
+	rule_head = &sw->recp_list[lkup].filt_rules;
+	ice_acquire_lock(rule_lock);
+	status = ice_add_to_vsi_fltr_list(hw, vsi_handle, rule_head,
+					  &remove_list_head);
+	ice_release_lock(rule_lock);
+	if (status)
+		return;
+
+	switch (lkup) {
+	case ICE_SW_LKUP_MAC:
+		ice_remove_mac(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_VLAN:
+		ice_remove_vlan(hw, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_PROMISC:
+	case ICE_SW_LKUP_PROMISC_VLAN:
+		ice_remove_promisc(hw, lkup, &remove_list_head);
+		break;
+	case ICE_SW_LKUP_MAC_VLAN:
+#ifndef NO_MACVLAN_SUPPORT
+		ice_remove_mac_vlan(hw, &remove_list_head);
+#else
+		ice_debug(hw, ICE_DBG_SW, "MAC VLAN lookup is not supported yet\n");
+#endif /* !NO_MACVLAN_SUPPORT */
+		break;
+	case ICE_SW_LKUP_ETHERTYPE:
+	case ICE_SW_LKUP_ETHERTYPE_MAC:
+	case ICE_SW_LKUP_DFLT:
+		ice_debug(hw, ICE_DBG_SW,
+			  "Remove filters for this lookup type hasn't been implemented yet\n");
+		break;
+	case ICE_SW_LKUP_LAST:
+		ice_debug(hw, ICE_DBG_SW, "Unsupported lookup type\n");
+		break;
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fm_entry, tmp, &remove_list_head,
+				 ice_fltr_list_entry, list_entry) {
+		LIST_DEL(&fm_entry->list_entry);
+		ice_free(hw, fm_entry);
+	}
+}
+
+/**
+ * ice_remove_vsi_fltr - Remove all filters for a VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle to remove filters from
+ */
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_remove_vsi_fltr\n");
+
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_MAC_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_VLAN);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_DFLT);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_ETHERTYPE_MAC);
+	ice_remove_vsi_lkup_fltr(hw, vsi_handle, ICE_SW_LKUP_PROMISC_VLAN);
+}
+
+/**
+ * ice_replay_vsi_fltr - Replay filters for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ * @recp_id: Recipe id for which rules need to be replayed
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replays the filters of recipe recp_id for a VSI represented via vsi_handle.
+ * It is required to pass a valid VSI handle.
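+ * For rules that forward to a VSI list, the VSI is first cleared from the
+ * list's bitmap so that the add logic below can re-add it.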
+ */
+static enum ice_status
+ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
+		    struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_fltr_mgmt_list_entry *itr;
+	enum ice_status status = ICE_SUCCESS;
+	u16 hw_vsi_id;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	hw_vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	LIST_FOR_EACH_ENTRY(itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_fltr_list_entry f_entry;
+
+		f_entry.fltr_info = itr->fltr_info;
+		if (itr->vsi_count < 2 && recp_id != ICE_SW_LKUP_VLAN &&
+		    itr->fltr_info.vsi_handle == vsi_handle) {
+			/* update the src in case it is vsi num */
+			if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+				f_entry.fltr_info.src = hw_vsi_id;
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+			if (status != ICE_SUCCESS)
+				goto end;
+			continue;
+		}
+		if (!itr->vsi_list_info ||
+		    !ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
+			continue;
+		/* Clearing it so that the logic can add it back */
+		ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
+		f_entry.fltr_info.vsi_handle = vsi_handle;
+		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
+		/* update the src in case it is vsi num */
+		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
+			f_entry.fltr_info.src = hw_vsi_id;
+		if (recp_id == ICE_SW_LKUP_VLAN)
+			status = ice_add_vlan_internal(hw, &f_entry);
+		else
+			status = ice_add_rule_internal(hw, recp_id, &f_entry);
+		if (status != ICE_SUCCESS)
+			goto end;
+	}
+end:
+	return status;
+}
+
+
+/**
+ * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver vsi handle
+ *
+ * Replays filters for requested VSI via vsi_handle.
+ */
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* Update the default recipes and the ones that were created */
+		if (i < ICE_SW_LKUP_LAST || sw->recp_list[i].recp_created) {
+			struct LIST_HEAD_TYPE *head;
+
+			head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
+							     head);
+			if (status != ICE_SUCCESS)
+				return status;
+		}
+	}
+	return status;
+}
+
+/**
+ * ice_rm_all_sw_replay_rule_info - deletes filter replay rules
+ * @hw: pointer to the hw struct
+ *
+ * Deletes the filter replay rules.
+ */
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	if (!sw)
+		return;
+
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		if (!LIST_EMPTY(&sw->recp_list[i].filt_replay_rules)) {
+			struct LIST_HEAD_TYPE *l_head;
+
+			l_head = &sw->recp_list[i].filt_replay_rules;
+			if (!sw->recp_list[i].adv_rule)
+				ice_rem_sw_rule_info(hw, l_head);
+		}
+	}
+}
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
new file mode 100644
index 0000000..66a172f
--- /dev/null
+++ b/drivers/net/ice/base/ice_switch.h
@@ -0,0 +1,333 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_SWITCH_H_
+#define _ICE_SWITCH_H_
+
+#include "ice_common.h"
+#include "ice_protocol_type.h"
+
+#define ICE_SW_CFG_MAX_BUF_LEN 2048
+#define ICE_MAX_SW 256
+#define ICE_DFLT_VSI_INVAL 0xff
+
+#define ICE_VSI_INVAL_ID 0xFFFF
+
+/* VSI context structure for add/get/update/free operations */
+struct ice_vsi_ctx {
+	u16 vsi_num;
+	u16 vsis_allocd;
+	u16 vsis_unallocated;
+	u16 flags;
+	struct ice_aqc_vsi_props info;
+	struct ice_sched_vsi_info sched;
+	u8 alloc_from_pool;
+	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
+	struct LIST_HEAD_TYPE rss_list_head;
+};
+
+
+/* Switch recipe ID enum values are specific to hardware */
+enum ice_sw_lkup_type {
+	ICE_SW_LKUP_ETHERTYPE = 0,
+	ICE_SW_LKUP_MAC = 1,
+	ICE_SW_LKUP_MAC_VLAN = 2,
+	ICE_SW_LKUP_PROMISC = 3,
+	ICE_SW_LKUP_VLAN = 4,
+	ICE_SW_LKUP_DFLT = 5,
+	ICE_SW_LKUP_ETHERTYPE_MAC = 8,
+	ICE_SW_LKUP_PROMISC_VLAN = 9,
+	ICE_SW_LKUP_LAST,
+};
+
+/* type of filter src id */
+enum ice_src_id {
+	ICE_SRC_ID_UNKNOWN = 0,
+	ICE_SRC_ID_VSI,
+	ICE_SRC_ID_QUEUE,
+	ICE_SRC_ID_LPORT,
+};
+
+struct ice_fltr_info {
+	/* Look up information: how to look up packet */
+	enum ice_sw_lkup_type lkup_type;
+	/* Forward action: filter action to do after lookup */
+	enum ice_sw_fwd_act_type fltr_act;
+	/* rule ID returned by firmware once filter rule is created */
+	u16 fltr_rule_id;
+	u16 flag;
+#define ICE_FLTR_RX		BIT(0)
+#define ICE_FLTR_TX		BIT(1)
+#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
+
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	enum ice_src_id src_id;
+
+	union {
+		struct {
+			u8 mac_addr[ETH_ALEN];
+		} mac;
+		struct {
+			u8 mac_addr[ETH_ALEN];
+			u16 vlan_id;
+		} mac_vlan;
+		struct {
+			u16 vlan_id;
+		} vlan;
+		/* Set lkup_type as ICE_SW_LKUP_ETHERTYPE
+		 * if just using ethertype as filter. Set lkup_type as
+		 * ICE_SW_LKUP_ETHERTYPE_MAC if MAC also needs to be
+		 * passed in as filter.
+		 */
+		struct {
+			u16 ethertype;
+			u8 mac_addr[ETH_ALEN]; /* optional */
+		} ethertype_mac;
+	} l_data; /* Make sure to zero out the memory of l_data before using
+		   * it, or only set the data associated with the lookup match;
+		   * everything else should be zero
+		   */
+
+	/* Depending on filter action */
+	union {
+		/* queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 hw_vsi_id:10;
+		u16 vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+
+	/* Sw VSI handle */
+	u16 vsi_handle;
+
+	/* Set to num_queues if action is ICE_FWD_TO_QGRP. This field
+	 * determines the range of queues the packet needs to be forwarded to.
+	 * Note that qgrp_size must be set to a power of 2.
+	 */
+	u8 qgrp_size;
+
+	/* Rule creation populates these indicators based on the switch type */
+	u8 lb_en;	/* Indicate if packet can be looped back */
+	u8 lan_en;	/* Indicate if packet can be forwarded to the uplink */
+};
+
+struct ice_adv_lkup_elem {
+	enum ice_protocol_type type;
+	union ice_prot_hdr h_u;	/* Header values */
+	union ice_prot_hdr m_u;	/* Mask of header values to match */
+};
+
+struct ice_sw_act_ctrl {
+	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
+	u16 src;
+	u16 flag;
+#define ICE_FLTR_RX             BIT(0)
+#define ICE_FLTR_TX             BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
+
+	enum ice_sw_fwd_act_type fltr_act;
+	/* Depending on filter action */
+	union {
+		/* This is a queue id in case of ICE_FWD_TO_Q and starting
+		 * queue id in case of ICE_FWD_TO_QGRP.
+		 */
+		u16 q_id:11;
+		u16 vsi_id:10;
+		u16 hw_vsi_id:10;
+		u16 vsi_list_id:10;
+	} fwd_id;
+	/* software VSI handle */
+	u16 vsi_handle;
+	u8 qgrp_size;
+};
+
+struct ice_adv_rule_info {
+	enum ice_sw_tunnel_type tun_type;
+	struct ice_sw_act_ctrl sw_act;
+	u32 priority;
+	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+};
+
+/* A collection of one or more four-word recipes */
+struct ice_sw_recipe {
+	/* For a chained recipe the root recipe is what should be used for
+	 * programming rules
+	 */
+	u8 root_rid;
+	u8 recp_created;
+
+	/* Number of extraction words */
+	u8 n_ext_words;
+	/* Protocol ID and Offset pair (extraction word) to describe the
+	 * recipe
+	 */
+	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+
+	/* if this recipe is a collection of other recipes */
+	u8 big_recp;
+
+	/* if this recipe is part of another, bigger recipe, then this is the
+	 * chain index corresponding to this recipe
+	 */
+	u8 chain_idx;
+
+	/* if this recipe is a collection of other recipes, this is the count
+	 * of those recipes (their IDs are tracked in r_bitmap below)
+	 */
+	u8 n_grp_count;
+
+	/* Bit map specifying the IDs associated with this group of recipes */
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+	enum ice_sw_tunnel_type tun_type;
+
+	/* List of type ice_fltr_mgmt_list_entry or adv_rule */
+	u8 adv_rule;
+	struct LIST_HEAD_TYPE filt_rules;
+	struct LIST_HEAD_TYPE filt_replay_rules;
+
+	struct ice_lock filt_rule_lock;	/* protect filter rule structure */
+
+	/* Profiles this recipe should be associated with */
+	struct LIST_HEAD_TYPE fv_list;
+
+	/* Profiles this recipe is associated with */
+	u8 num_profs, *prof_ids;
+
+	/* This allows the user to specify the recipe priority.
+	 * For now, this becomes 'fwd_priority' when the recipe
+	 * is created; usually recipes can have 'fwd' and 'join'
+	 * priority.
+	 */
+	u8 priority;
+
+	struct LIST_HEAD_TYPE rg_list;
+
+	/* AQ buffer associated with this recipe */
+	struct ice_aqc_recipe_data_elem *root_buf;
+};
+
+/* Bookkeeping structure to hold bitmap of VSIs corresponding to VSI list id */
+struct ice_vsi_list_map_info {
+	struct LIST_ENTRY_TYPE list_entry;
+	ice_declare_bitmap(vsi_map, ICE_MAX_VSI);
+	u16 vsi_list_id;
+	/* counter to track how many rules are reusing this VSI list */
+	u16 ref_cnt;
+};
+
+struct ice_fltr_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+	enum ice_status status;
+	struct ice_fltr_info fltr_info;
+};
+
+/* This defines an entry in the list that maintains the mapping of MAC or
+ * VLAN membership to a HW list, since multiple VSIs can subscribe to the
+ * same MAC or VLAN. As an optimization, the VSI list should be created only
+ * when a second VSI becomes a subscriber to the same MAC address. VSI lists
+ * are always used for VLAN membership.
+ */
+struct ice_fltr_mgmt_list_entry {
+	/* back pointer to VSI list id to VSI list mapping */
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
+#define ICE_INVAL_LG_ACT_INDEX 0xffff
+	u16 lg_act_idx;
+#define ICE_INVAL_SW_MARKER_ID 0xffff
+	u16 sw_marker_id;
+	struct LIST_ENTRY_TYPE list_entry;
+	struct ice_fltr_info fltr_info;
+#define ICE_INVAL_COUNTER_ID 0xff
+	u8 counter_index;
+};
+
+struct ice_adv_fltr_mgmt_list_entry {
+	struct LIST_ENTRY_TYPE list_entry;
+
+	struct ice_adv_lkup_elem *lkups;
+	struct ice_adv_rule_info rule_info;
+	u16 lkups_cnt;
+};
+
+enum ice_promisc_flags {
+	ICE_PROMISC_UCAST_RX = 0x1,
+	ICE_PROMISC_UCAST_TX = 0x2,
+	ICE_PROMISC_MCAST_RX = 0x4,
+	ICE_PROMISC_MCAST_TX = 0x8,
+	ICE_PROMISC_BCAST_RX = 0x10,
+	ICE_PROMISC_BCAST_TX = 0x20,
+	ICE_PROMISC_VLAN_RX = 0x40,
+	ICE_PROMISC_VLAN_TX = 0x80,
+};
+
+/* VSI related commands */
+enum ice_status
+ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	    struct ice_sq_cd *cd);
+enum ice_status
+ice_free_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	     bool keep_vsi_alloc, struct ice_sq_cd *cd);
+enum ice_status
+ice_update_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
+	       struct ice_sq_cd *cd);
+struct ice_vsi_ctx *ice_get_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_clear_all_vsi_ctx(struct ice_hw *hw);
+/* Switch config */
+enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw);
+
+enum ice_status
+ice_alloc_vlan_res_counter(struct ice_hw *hw, u16 *counter_id);
+enum ice_status
+ice_free_vlan_res_counter(struct ice_hw *hw, u16 counter_id);
+
+/* Switch/bridge related commands */
+enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#ifndef NO_MACVLAN_SUPPORT
+enum ice_status
+ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
+#endif /* !NO_MACVLAN_SUPPORT */
+
+void ice_remove_vsi_fltr(struct ice_hw *hw, u16 vsi_handle);
+
+
+/* Promisc/defport setup for VSIs */
+enum ice_status
+ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
+		 u8 direction);
+enum ice_status
+ice_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		    u16 vid);
+enum ice_status
+ice_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+		      u16 vid);
+enum ice_status
+ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+			 bool rm_vlan_promisc);
+
+enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
+u16 ice_get_hw_vsi_num(struct ice_hw *hw, u16 vsi_handle);
+bool ice_is_vsi_valid(struct ice_hw *hw, u16 vsi_handle);
+
+enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle);
+void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw);
+
+#endif /* _ICE_SWITCH_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 09/31] net/ice/base: add code to work with the NVM
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (7 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 10/31] net/ice/base: add common functions Wenzhuo Lu
                     ` (22 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code to read/write/query the NVM image.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_nvm.c | 387 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 387 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_nvm.c

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
new file mode 100644
index 0000000..25a2ca4
--- /dev/null
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+
+
+/**
+ * ice_aq_read_nvm
+ * @hw: pointer to the hw struct
+ * @module_typeid: module pointer location in words from the NVM beginning
+ * @offset: byte offset from the module beginning
+ * @length: length of the section to be read (in bytes from the offset)
+ * @data: command buffer (size [bytes] = length)
+ * @last_command: tells if this is the last command in a series
+ * @cd: pointer to command details structure or NULL
+ *
+ * Read the NVM using the admin queue commands (0x0701)
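+ *
+ * The 24-bit offset is split across the descriptor: the lower 16 bits go
+ * into offset_low and the upper 8 bits into offset_high.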
+ */
+static enum ice_status
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+		void *data, bool last_command, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+	struct ice_aqc_nvm *cmd;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+
+	cmd = &desc.params.nvm;
+
+	/* The highest byte of the offset must be zero. */
+	if (offset & 0xFF000000)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read);
+
+	/* If this is the last command in a series, set the proper flag. */
+	if (last_command)
+		cmd->cmd_flags |= ICE_AQC_NVM_LAST_CMD;
+	cmd->module_typeid = CPU_TO_LE16(module_typeid);
+	cmd->offset_low = CPU_TO_LE16(offset & 0xFFFF);
+	cmd->offset_high = (offset >> 16) & 0xFF;
+	cmd->length = CPU_TO_LE16(length);
+
+	return ice_aq_send_cmd(hw, &desc, data, length, cd);
+}
+
+/**
+ * ice_check_sr_access_params - verify params for Shadow RAM R/W operations.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to access
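+ *
+ * An access must stay within the Shadow RAM, cover at most one 4KB sector
+ * (ICE_SR_SECTOR_SIZE_IN_WORDS words), and must not cross a sector boundary.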
+ */
+static enum ice_status
+ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words)
+{
+	if ((offset + words) > hw->nvm.sr_words) {
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: offset beyond SR lmt.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) {
+		/* We can access only up to 4KB (one sector), in one AQ write */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: tried to access %d words, limit is %d.\n",
+			  words, ICE_SR_SECTOR_SIZE_IN_WORDS);
+		return ICE_ERR_PARAM;
+	}
+
+	if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) !=
+	    (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) {
+		/* A single access cannot spread over two sectors */
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM error: cannot spread over two sectors.\n");
+		return ICE_ERR_PARAM;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_read_sr_aq - Read Shadow RAM.
+ * @hw: pointer to the HW structure
+ * @offset: offset in words from module start
+ * @words: number of words to read
+ * @data: buffer for words reads from Shadow RAM
+ * @last_command: tells the AdminQ that this is the last command
+ *
+ * Reads 16-bit word buffers from the Shadow RAM using the admin command.
+ */
+static enum ice_status
+ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
+	       bool last_command)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+
+	status = ice_check_sr_access_params(hw, offset, words);
+
+	/* values in "offset" and "words" parameters are sized as words
+	 * (16 bits) but ice_aq_read_nvm expects these values in bytes.
+	 * So do this conversion while calling ice_aq_read_nvm.
+	 */
+	if (!status)
+		status = ice_aq_read_nvm(hw, 0, 2 * offset, 2 * words, data,
+					 last_command, NULL);
+
+	return status;
+}
+
+/**
+ * ice_read_sr_word_aq - Reads Shadow RAM via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method.
+ */
+static enum ice_status
+ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+
+	status = ice_read_sr_aq(hw, offset, 1, data, true);
+	if (!status)
+		*data = LE16_TO_CPU(*(__le16 *)data);
+
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf_aq - Reads Shadow RAM buf via AQ
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_aq
+ * method. NVM ownership is expected to be taken by the caller before the
+ * buffer is read and released afterwards (see ice_read_sr_buf).
+ */
+static enum ice_status
+ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+	bool last_cmd = false;
+	u16 words_read = 0;
+	u16 i = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+
+	do {
+		u16 read_size, off_w;
+
+		/* Calculate the number of words we should read in this step.
+		 * It's not allowed to read more than one sector (4KB) at a
+		 * time or to cross sector boundaries.
+		 */
+		off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS;
+		read_size = off_w ?
+			min(*words,
+			    (u16)(ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) :
+			min((*words - words_read), ICE_SR_SECTOR_SIZE_IN_WORDS);
+
+		/* Check if this is last command, if so set proper flag */
+		if ((words_read + read_size) >= *words)
+			last_cmd = true;
+
+		status = ice_read_sr_aq(hw, offset, read_size,
+					data + words_read, last_cmd);
+		if (status)
+			goto read_nvm_buf_aq_exit;
+
+		/* Increment counter for words already read and move offset to
+		 * new read location
+		 */
+		words_read += read_size;
+		offset += read_size;
+	} while (words_read < *words);
+
+	for (i = 0; i < *words; i++)
+		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+
+read_nvm_buf_aq_exit:
+	*words = words_read;
+	return status;
+}
+
+/**
+ * ice_acquire_nvm - Generic request for acquiring the NVM ownership
+ * @hw: pointer to the HW structure
+ * @access: NVM access type (read or write)
+ *
+ * This function will request NVM ownership.
+ */
+static enum ice_status
+ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return ICE_SUCCESS;
+
+	return ice_acquire_res(hw, ICE_NVM_RES_ID, access, ICE_NVM_TIMEOUT);
+}
+
+/**
+ * ice_release_nvm - Generic request for releasing the NVM ownership
+ * @hw: pointer to the HW structure
+ *
+ * This function will release NVM ownership.
+ */
+static void ice_release_nvm(struct ice_hw *hw)
+{
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+
+	if (hw->nvm.blank_nvm_mode)
+		return;
+
+	ice_release_res(hw, ICE_NVM_RES_ID);
+}
+
+/**
+ * ice_read_sr_word - Reads Shadow RAM word and acquire NVM if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @data: word read from the Shadow RAM
+ *
+ * Reads one 16 bit word from the Shadow RAM using ice_read_sr_word_aq.
+ */
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_word_aq(hw, offset, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+/**
+ * ice_init_nvm - initializes NVM setting
+ * @hw: pointer to the hw struct
+ *
+ * This function reads and populates NVM settings such as Shadow RAM size,
+ * max_timeout, and blank_nvm_mode
+ */
+enum ice_status ice_init_nvm(struct ice_hw *hw)
+{
+	struct ice_nvm_info *nvm = &hw->nvm;
+	u16 oem_hi, oem_lo, cfg_ptr;
+	u16 eetrack_lo, eetrack_hi;
+	enum ice_status status = ICE_SUCCESS;
+	u32 fla, gens_stat;
+	u8 sr_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+
+	/* The SR size is stored regardless of the NVM programming mode,
+	 * as the blank mode may be used on the factory line.
+	 */
+	gens_stat = rd32(hw, GLNVM_GENS);
+	sr_size = (gens_stat & GLNVM_GENS_SR_SIZE_M) >> GLNVM_GENS_SR_SIZE_S;
+
+	/* Switching to words (sr_size contains power of 2) */
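+	/* e.g. sr_size of 5 gives BIT(5) = 32 (KB), i.e. 32 * 512 words */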
+	nvm->sr_words = BIT(sr_size) * ICE_SR_WORDS_IN_1KB;
+
+	/* Check if we are in the normal or blank NVM programming mode */
+	fla = rd32(hw, GLNVM_FLA);
+	if (fla & GLNVM_FLA_LOCKED_M) { /* Normal programming mode */
+		nvm->blank_nvm_mode = false;
+	} else { /* Blank programming mode */
+		nvm->blank_nvm_mode = true;
+		status = ICE_ERR_NVM_BLANK_MODE;
+		ice_debug(hw, ICE_DBG_NVM,
+			  "NVM init error: unsupported blank mode.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &hw->nvm.ver);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to read DEV starter version.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK lo.\n");
+		return status;
+	}
+	status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_HI, &eetrack_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read EETRACK hi.\n");
+		return status;
+	}
+
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	status = ice_read_sr_word(hw, ICE_SR_BOOT_CFG_PTR, &cfg_ptr);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read BOOT_CONFIG_PTR.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + ICE_NVM_OEM_VER_OFF), &oem_hi);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n");
+		return status;
+	}
+
+	status = ice_read_sr_word(hw, (cfg_ptr + (ICE_NVM_OEM_VER_OFF + 1)),
+				  &oem_lo);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n");
+		return status;
+	}
+
+	hw->nvm.oem_ver = ((u32)oem_hi << 16) | oem_lo;
+	return status;
+}
+
+
+/**
+ * ice_read_sr_buf - Reads Shadow RAM buf and acquire lock if necessary
+ * @hw: pointer to the HW structure
+ * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF)
+ * @words: (in) number of words to read; (out) number of words actually read
+ * @data: words read from the Shadow RAM
+ *
+ * Reads 16 bit words (data buf) from the SR using the ice_read_sr_buf_aq
+ * method. The buffer read is preceded by taking NVM ownership and followed
+ * by its release.
+ */
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
+{
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (!status) {
+		status = ice_read_sr_buf_aq(hw, offset, words, data);
+		ice_release_nvm(hw);
+	}
+
+	return status;
+}
+
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the hw struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
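+ *
+ * The AQ command returns the computed checksum; any value other than
+ * ICE_AQC_NVM_CHECKSUM_CORRECT is mapped to ICE_ERR_NVM_CHECKSUM.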
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+	struct ice_aqc_nvm_checksum *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	cmd = &desc.params.nvm_checksum;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+	cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+	ice_release_nvm(hw);
+
+	if (!status)
+		if (LE16_TO_CPU(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+			status = ICE_ERR_NVM_CHECKSUM;
+
+	return status;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 10/31] net/ice/base: add common functions
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (8 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 11/31] net/ice/base: add various headers Wenzhuo Lu
                     ` (21 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add code that multiple other features use.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_common.c | 3521 +++++++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  186 ++
 2 files changed, 3707 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
new file mode 100644
index 0000000..d49264d
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.c
@@ -0,0 +1,3521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#include "ice_common.h"
+#include "ice_sched.h"
+#include "ice_adminq_cmd.h"
+
+#include "ice_flow.h"
+#include "ice_switch.h"
+
+#define ICE_PF_RESET_WAIT_COUNT	200
+
+#define ICE_PROG_FLEX_ENTRY(hw, rxdid, mdid, idx) \
+	wr32((hw), GLFLXP_RXDID_FLX_WRD_##idx(rxdid), \
+	     ((ICE_RX_OPC_MDID << \
+	       GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_RXDID_OPCODE_M) | \
+	     (((mdid) << GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_S) & \
+	      GLFLXP_RXDID_FLX_WRD_##idx##_PROT_MDID_M))
+
+#define ICE_PROG_FLG_ENTRY(hw, rxdid, flg_0, flg_1, flg_2, flg_3, idx) \
+	wr32((hw), GLFLXP_RXDID_FLAGS(rxdid, idx), \
+	     (((flg_0) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_M) | \
+	     (((flg_1) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_1_M) | \
+	     (((flg_2) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_2_M) | \
+	     (((flg_3) << GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_S) & \
+	      GLFLXP_RXDID_FLAGS_FLEXIFLAG_4N_3_M))
+
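+/* The two macros above program, for a given Rx descriptor profile (rxdid),
+ * one flexible metadata word (selected by its MDID) and one group of four
+ * flexible flag bits, respectively.
+ */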
+
+/**
+ * ice_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the MAC type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+static enum ice_status ice_set_mac_type(struct ice_hw *hw)
+{
+	enum ice_status status = ICE_SUCCESS;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_set_mac_type\n");
+
+	if (hw->vendor_id == ICE_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		default:
+			hw->mac_type = ICE_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = ICE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n",
+		  hw->mac_type, status);
+
+	return status;
+}
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
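+/**
+ * ice_dev_onetime_setup - Temporary HW/FW workarounds
+ * @hw: pointer to the hw struct
+ *
+ * Provides temporary workarounds for issues expected to be fixed in
+ * future HW/FW; only compiled in for FPGA or A0 silicon support.
+ */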
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non pxe mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+}
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+/**
+ * ice_clear_pf_cfg - Clear PF configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Clears any existing PF configuration (VSIs, VSI lists, switch rules, port
+ * configuration, flow director filters, etc.).
+ */
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pf_cfg);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_manage_mac_read - manage MAC address read command
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the manage MAC read response
+ * @buf_size: Size of the virtual buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to return the per-PF station MAC address (0x0107).
+ * NOTE: Upon successful completion of this command, MAC address information
+ * is returned in the user specified buffer. Please interpret the user
+ * specified buffer as a "manage_mac_read" response.
+ * Responses such as various MAC addresses are stored in the HW struct
+ * (port.mac). ice_aq_discover_caps is expected to be called before this
+ * function is called.
+ */
+static enum ice_status
+ice_aq_manage_mac_read(struct ice_hw *hw, void *buf, u16 buf_size,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_read_resp *resp;
+	struct ice_aqc_manage_mac_read *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags;
+	u8 i;
+
+	cmd = &desc.params.mac_read;
+
+	if (buf_size < sizeof(*resp))
+		return ICE_ERR_BUF_TOO_SHORT;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_read);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (status)
+		return status;
+
+	resp = (struct ice_aqc_manage_mac_read_resp *)buf;
+	flags = LE16_TO_CPU(cmd->flags) & ICE_AQC_MAN_MAC_READ_M;
+
+	if (!(flags & ICE_AQC_MAN_MAC_LAN_ADDR_VALID)) {
+		ice_debug(hw, ICE_DBG_LAN, "got invalid MAC address\n");
+		return ICE_ERR_CFG;
+	}
+
+	/* A single port can report up to two (LAN and WoL) addresses */
+	for (i = 0; i < cmd->num_addr; i++)
+		if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) {
+			ice_memcpy(hw->port_info->mac.lan_addr,
+				   resp[i].mac_addr, ETH_ALEN,
+				   ICE_DMA_TO_NONDMA);
+			ice_memcpy(hw->port_info->mac.perm_addr,
+				   resp[i].mac_addr,
+				   ETH_ALEN, ICE_DMA_TO_NONDMA);
+			break;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_aq_get_phy_caps - returns PHY capabilities
+ * @pi: port information structure
+ * @qual_mods: report qualified modules
+ * @report_mode: report mode capabilities
+ * @pcaps: structure for PHY capabilities to be filled
+ * @cd: pointer to command details structure or NULL
+ *
+ * Returns the various PHY capabilities supported on the Port (0x0600)
+ */
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *pcaps,
+		    struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_phy_caps *cmd;
+	u16 pcaps_size = sizeof(*pcaps);
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_phy;
+
+	if (!pcaps || (report_mode & ~ICE_AQC_REPORT_MODE_M) || !pi)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_caps);
+
+	if (qual_mods)
+		cmd->param0 |= CPU_TO_LE16(ICE_AQC_GET_PHY_RQM);
+
+	cmd->param0 |= CPU_TO_LE16(report_mode);
+	status = ice_aq_send_cmd(pi->hw, &desc, pcaps, pcaps_size, cd);
+
+	if (status == ICE_SUCCESS && report_mode == ICE_AQC_REPORT_TOPO_CAP) {
+		pi->phy.phy_type_low = LE64_TO_CPU(pcaps->phy_type_low);
+		pi->phy.phy_type_high = LE64_TO_CPU(pcaps->phy_type_high);
+	}
+
+	return status;
+}
+
+/**
+ * ice_get_media_type - Gets media type
+ * @pi: port information structure
+ */
+static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
+{
+	struct ice_link_status *hw_link_info;
+
+	if (!pi)
+		return ICE_MEDIA_UNKNOWN;
+
+	hw_link_info = &pi->phy.link_info;
+	if (hw_link_info->phy_type_low && hw_link_info->phy_type_high)
+		/* If more than one media type is selected, report unknown */
+		return ICE_MEDIA_UNKNOWN;
+
+	if (hw_link_info->phy_type_low) {
+		switch (hw_link_info->phy_type_low) {
+		case ICE_PHY_TYPE_LOW_1000BASE_SX:
+		case ICE_PHY_TYPE_LOW_1000BASE_LX:
+		case ICE_PHY_TYPE_LOW_10GBASE_SR:
+		case ICE_PHY_TYPE_LOW_10GBASE_LR:
+		case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		case ICE_PHY_TYPE_LOW_25GBASE_SR:
+		case ICE_PHY_TYPE_LOW_25GBASE_LR:
+		case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_SR:
+		case ICE_PHY_TYPE_LOW_50GBASE_FR:
+		case ICE_PHY_TYPE_LOW_50GBASE_LR:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_DR:
+			return ICE_MEDIA_FIBER;
+		case ICE_PHY_TYPE_LOW_100BASE_TX:
+		case ICE_PHY_TYPE_LOW_1000BASE_T:
+		case ICE_PHY_TYPE_LOW_2500BASE_T:
+		case ICE_PHY_TYPE_LOW_5GBASE_T:
+		case ICE_PHY_TYPE_LOW_10GBASE_T:
+		case ICE_PHY_TYPE_LOW_25GBASE_T:
+			return ICE_MEDIA_BASET;
+		case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+		case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+		case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+		case ICE_PHY_TYPE_LOW_50GBASE_CP:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+		case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+			return ICE_MEDIA_DA;
+		case ICE_PHY_TYPE_LOW_1000BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		case ICE_PHY_TYPE_LOW_2500BASE_X:
+		case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+		case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+		case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+		case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+		case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	} else {
+		switch (hw_link_info->phy_type_high) {
+		case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+			return ICE_MEDIA_BACKPLANE;
+		}
+	}
+	return ICE_MEDIA_UNKNOWN;
+}
+
+/**
+ * ice_aq_get_link_info
+ * @pi: port information structure
+ * @ena_lse: enable/disable LinkStatusEvent reporting
+ * @link: pointer to link status structure - optional
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Link Status (0x0607). Returns the link status of the adapter.
+ */
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd)
+{
+	struct ice_link_status *hw_link_info_old, *hw_link_info;
+	struct ice_aqc_get_link_status_data link_data = { 0 };
+	struct ice_aqc_get_link_status *resp;
+	enum ice_media_type *hw_media_type;
+	struct ice_fc_info *hw_fc_info;
+	bool tx_pause, rx_pause;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 cmd_flags;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw_link_info_old = &pi->phy.link_info_old;
+	hw_media_type = &pi->phy.media_type;
+	hw_link_info = &pi->phy.link_info;
+	hw_fc_info = &pi->fc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
+	cmd_flags = (ena_lse) ? ICE_AQ_LSE_ENA : ICE_AQ_LSE_DIS;
+	resp = &desc.params.get_link_status;
+	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
+	resp->lport_num = pi->lport;
+
+	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
+				 cd);
+
+	if (status != ICE_SUCCESS)
+		return status;
+
+	/* save off old link status information */
+	*hw_link_info_old = *hw_link_info;
+
+	/* update current link status information */
+	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
+	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	*hw_media_type = ice_get_media_type(pi);
+	hw_link_info->link_info = link_data.link_info;
+	hw_link_info->an_info = link_data.an_info;
+	hw_link_info->ext_info = link_data.ext_info;
+	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
+	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+
+	/* update fc info */
+	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
+	rx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_RX);
+	if (tx_pause && rx_pause)
+		hw_fc_info->current_mode = ICE_FC_FULL;
+	else if (tx_pause)
+		hw_fc_info->current_mode = ICE_FC_TX_PAUSE;
+	else if (rx_pause)
+		hw_fc_info->current_mode = ICE_FC_RX_PAUSE;
+	else
+		hw_fc_info->current_mode = ICE_FC_NONE;
+
+	hw_link_info->lse_ena =
+		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+	/* save link status information */
+	if (link)
+		*link = *hw_link_info;
+
+	/* flag cleared so calling functions don't call AQ again */
+	pi->phy.get_link_info = false;
+
+	return status;
+}
+
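+/* A minimal usage sketch (hypothetical helper): poll the link once
+ * into a caller-owned structure without subscribing to Link Status
+ * Events, mirroring the call made from ice_init_hw() below.
+ */
+static enum ice_status
+ice_example_poll_link(struct ice_port_info *pi, struct ice_link_status *link)
+{
+	/* ena_lse = false: one-shot query, no LSE subscription */
+	return ice_aq_get_link_info(pi, false, link, NULL);
+}
+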
+/**
+ * ice_init_flex_flags
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize Rx flex flags
+ */
+static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	u8 idx = 0;
+
+	/* Flex-flag fields (0-2) are programmed with FLG64 bits with layout:
+	 * flexiflags0[5:0] - TCP flags, is_packet_fragmented, is_packet_UDP_GRE
+	 * flexiflags1[3:0] - Not used for flag programming
+	 * flexiflags2[7:0] - Tunnel and VLAN types
+	 * 2 invalid fields in last index
+	 */
+	switch (prof_id) {
+	/* Rx flex flags are currently programmed for the NIC profiles only.
+	 * Different flag bit programming configurations can be added per
+	 * profile as needed.
+	 */
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
+				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_FIN, idx++);
+		/* flex flag 1 is not used for flexi-flag programming, skipping
+		 * these four FLG64 bits.
+		 */
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
+				   ICE_RXFLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
+				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
+				   ICE_RXFLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
+				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Flag programming for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_flex_flds
+ * @hw: pointer to the hardware structure
+ * @prof_id: Rx Descriptor Builder profile ID
+ *
+ * Function to initialize flex descriptors
+ */
+static void ice_init_flex_flds(struct ice_hw *hw, enum ice_rxdid prof_id)
+{
+	enum ice_flex_rx_mdid mdid;
+
+	switch (prof_id) {
+	case ICE_RXDID_FLEX_NIC:
+	case ICE_RXDID_FLEX_NIC_2:
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_LOW, 0);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_HASH_HIGH, 1);
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, ICE_RX_MDID_FLOW_ID_LOWER, 2);
+
+		mdid = (prof_id == ICE_RXDID_FLEX_NIC_2) ?
+			ICE_RX_MDID_SRC_VSI : ICE_RX_MDID_FLOW_ID_HIGH;
+
+		ICE_PROG_FLEX_ENTRY(hw, prof_id, mdid, 3);
+
+		ice_init_flex_flags(hw, prof_id);
+		break;
+
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Field init for profile ID %d not supported\n",
+			  prof_id);
+	}
+}
+
+/**
+ * ice_init_fltr_mgmt_struct - initializes filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw;
+
+	hw->switch_info = (struct ice_switch_info *)
+			  ice_malloc(hw, sizeof(*hw->switch_info));
+	sw = hw->switch_info;
+
+	if (!sw)
+		return ICE_ERR_NO_MEMORY;
+
+	INIT_LIST_HEAD(&sw->vsi_list_map_head);
+
+	return ice_init_def_sw_recp(hw);
+}
+
+/**
+ * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks
+ * @hw: pointer to the hw struct
+ */
+static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_vsi_list_map_info *v_pos_map;
+	struct ice_vsi_list_map_info *v_tmp_map;
+	struct ice_sw_recipe *recps;
+	u8 i;
+
+	LIST_FOR_EACH_ENTRY_SAFE(v_pos_map, v_tmp_map, &sw->vsi_list_map_head,
+				 ice_vsi_list_map_info, list_entry) {
+		LIST_DEL(&v_pos_map->list_entry);
+		ice_free(hw, v_pos_map);
+	}
+	recps = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		recps[i].root_rid = i;
+
+		if (recps[i].adv_rule) {
+			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+			struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_adv_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr->lkups);
+				ice_free(hw, lst_itr);
+			}
+		} else {
+			struct ice_fltr_mgmt_list_entry *lst_itr, *tmp_entry;
+
+			ice_destroy_lock(&recps[i].filt_rule_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry,
+						 &recps[i].filt_rules,
+						 ice_fltr_mgmt_list_entry,
+						 list_entry) {
+				LIST_DEL(&lst_itr->list_entry);
+				ice_free(hw, lst_itr);
+			}
+		}
+	}
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_free(hw, sw->recp_list);
+	ice_free(hw, sw);
+}
+
+#define ICE_FW_LOG_DESC_SIZE(n)	(sizeof(struct ice_aqc_fw_logging_data) + \
+	(((n) - 1) * sizeof(((struct ice_aqc_fw_logging_data *)0)->entry)))
+#define ICE_FW_LOG_DESC_SIZE_MAX	\
+	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
+
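+/* Worked example for the macro above, assuming "entry" is declared as a
+ * one-element array inside the structure (hence the "n - 1" term):
+ * ICE_FW_LOG_DESC_SIZE(1) equals sizeof(struct ice_aqc_fw_logging_data),
+ * and every additional module adds the size of one entry.
+ */
+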
+/**
+ * ice_cfg_fw_log - configure FW logging
+ * @hw: pointer to the hw struct
+ * @enable: enable certain FW logging events if true, disable all if false
+ *
+ * This function enables/disables the FW logging via Rx CQ events and a UART
+ * port based on predetermined configurations. FW logging via the Rx CQ can be
+ * enabled/disabled for individual PFs. However, FW logging via the UART can
+ * only be enabled/disabled for all PFs on the same device.
+ *
+ * To enable overall FW logging, the "cq_en" and "uart_en" enable bits in
+ * hw->fw_log need to be set accordingly, e.g. based on user-provided input,
+ * before initializing the device.
+ *
+ * When re/configuring FW logging, callers need to update the "cfg" elements of
+ * the hw->fw_log.evnts array with the desired logging event configurations for
+ * modules of interest. When disabling FW logging completely, the callers can
+ * just pass false in the "enable" parameter. On completion, the function will
+ * update the "cur" element of the hw->fw_log.evnts array with the resulting
+ * logging event configurations of the modules that are being re/configured. FW
+ * logging modules that are not part of a reconfiguration operation retain their
+ * previous states.
+ *
+ * Before resetting the device, it is recommended that the driver disables FW
+ * logging before shutting down the control queue. When disabling FW logging
+ * ("enable" = false), the latest configurations of FW logging events stored in
+ * hw->fw_log.evnts[] are not overridden to allow them to be reconfigured after
+ * a device reset.
+ *
+ * When enabling FW logging to emit log messages via the Rx CQ during the
+ * device's initialization phase, a mechanism alternative to interrupt handlers
+ * needs to be used to extract FW log messages from the Rx CQ periodically and
+ * to prevent the Rx CQ from being full and stalling other types of control
+ * messages from FW to SW. Interrupts are typically disabled during the device's
+ * initialization phase.
+ */
+static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
+{
+	struct ice_aqc_fw_logging_data *data = NULL;
+	struct ice_aqc_fw_logging *cmd;
+	enum ice_status status = ICE_SUCCESS;
+	u16 i, chgs = 0, len = 0;
+	struct ice_aq_desc desc;
+	u8 actv_evnts = 0;
+	void *buf = NULL;
+
+	if (!hw->fw_log.cq_en && !hw->fw_log.uart_en)
+		return ICE_SUCCESS;
+
+	/* Disable FW logging only when the control queue is still responsive */
+	if (!enable &&
+	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
+		return ICE_SUCCESS;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
+	cmd = &desc.params.fw_logging;
+
+	/* Indicate which controls are valid */
+	if (hw->fw_log.cq_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_AQ_VALID;
+
+	if (hw->fw_log.uart_en)
+		cmd->log_ctrl_valid |= ICE_AQC_FW_LOG_UART_VALID;
+
+	if (enable) {
+		/* Fill in an array of entries with FW logging modules and
+		 * logging events being reconfigured.
+		 */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 val;
+
+			/* Keep track of enabled event types */
+			actv_evnts |= hw->fw_log.evnts[i].cfg;
+
+			if (hw->fw_log.evnts[i].cfg == hw->fw_log.evnts[i].cur)
+				continue;
+
+			if (!data) {
+				data = (struct ice_aqc_fw_logging_data *)
+					ice_malloc(hw,
+						   ICE_FW_LOG_DESC_SIZE_MAX);
+				if (!data)
+					return ICE_ERR_NO_MEMORY;
+			}
+
+			val = i << ICE_AQC_FW_LOG_ID_S;
+			val |= hw->fw_log.evnts[i].cfg << ICE_AQC_FW_LOG_EN_S;
+			data->entry[chgs++] = CPU_TO_LE16(val);
+		}
+
+		/* Only enable FW logging if at least one module is specified.
+		 * If FW logging is currently enabled but no module is
+		 * enabled to emit log messages, disable FW logging altogether.
+		 */
+		if (actv_evnts) {
+			/* Leave if there is effectively no change */
+			if (!chgs)
+				goto out;
+
+			if (hw->fw_log.cq_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_AQ_EN;
+
+			if (hw->fw_log.uart_en)
+				cmd->log_ctrl |= ICE_AQC_FW_LOG_UART_EN;
+
+			buf = data;
+			len = ICE_FW_LOG_DESC_SIZE(chgs);
+			desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+		}
+	}
+
+	status = ice_aq_send_cmd(hw, &desc, buf, len, NULL);
+	if (!status) {
+		/* Update the current configuration to reflect events enabled.
+		 * hw->fw_log.cq_en and hw->fw_log.uart_en indicate if the FW
+		 * logging mode is enabled for the device. They do not reflect
+		 * actual modules being enabled to emit log messages. So, their
+		 * values remain unchanged even when all modules are disabled.
+		 */
+		u16 cnt = enable ? chgs : (u16)ICE_AQC_FW_LOG_ID_MAX;
+
+		hw->fw_log.actv_evnts = actv_evnts;
+		for (i = 0; i < cnt; i++) {
+			u16 v, m;
+
+			if (!enable) {
+				/* When disabling all FW logging events as part
+				 * of device's de-initialization, the original
+				 * configurations are retained, and can be used
+				 * to reconfigure FW logging later if the device
+				 * is re-initialized.
+				 */
+				hw->fw_log.evnts[i].cur = 0;
+				continue;
+			}
+
+			v = LE16_TO_CPU(data->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			hw->fw_log.evnts[m].cur = hw->fw_log.evnts[m].cfg;
+		}
+	}
+
+out:
+	if (data)
+		ice_free(hw, data);
+
+	return status;
+}
+
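+/* A minimal configuration sketch (hypothetical helper; the module index
+ * ICE_AQC_FW_LOG_ID_GENERAL and the event bit ICE_AQC_FW_LOG_INIT_EN are
+ * assumed from the adminq command header): route init-time FW log
+ * messages for one module to the Rx CQ, then apply the configuration.
+ */
+static enum ice_status ice_example_enable_fw_log(struct ice_hw *hw)
+{
+	hw->fw_log.cq_en = true;
+	hw->fw_log.evnts[ICE_AQC_FW_LOG_ID_GENERAL].cfg =
+		ICE_AQC_FW_LOG_INIT_EN;
+	return ice_cfg_fw_log(hw, true);
+}
+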
+/**
+ * ice_output_fw_log
+ * @hw: pointer to the hw struct
+ * @desc: pointer to the AQ message descriptor
+ * @buf: pointer to the buffer accompanying the AQ message
+ *
+ * Formats a FW Log message and outputs it via the standard driver logs.
+ */
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
+{
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+			LE16_TO_CPU(desc->datalen));
+	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+}
+
+/**
+ * ice_get_itr_intrl_gran - determine itr/intrl granularity
+ * @hw: pointer to the hw struct
+ *
+ * Determines the itr/intrl granularities based on the maximum aggregate
+ * bandwidth according to the device's configuration during power-on.
+ */
+static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+{
+	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
+			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
+			GL_PWR_MODE_CTL_CAR_MAX_BW_S;
+
+	switch (max_agg_bw) {
+	case ICE_MAX_AGG_BW_200G:
+	case ICE_MAX_AGG_BW_100G:
+	case ICE_MAX_AGG_BW_50G:
+		hw->itr_gran = ICE_ITR_GRAN_ABOVE_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_ABOVE_25;
+		break;
+	case ICE_MAX_AGG_BW_25G:
+		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
+		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Failed to determine itr/intrl granularity\n");
+		return ICE_ERR_CFG;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_init_hw - main hardware initialization routine
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_init_hw(struct ice_hw *hw)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u16 mac_buf_len;
+	void *mac_buf;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+
+	/* Set MAC type based on DeviceID */
+	status = ice_set_mac_type(hw);
+	if (status)
+		return status;
+
+	hw->pf_id = (u8)(rd32(hw, PF_FUNC_RID) &
+			 PF_FUNC_RID_FUNCTION_NUMBER_M) >>
+		PF_FUNC_RID_FUNCTION_NUMBER_S;
+
+	status = ice_reset(hw, ICE_RESET_PFR);
+	if (status)
+		return status;
+
+	status = ice_get_itr_intrl_gran(hw);
+	if (status)
+		return status;
+
+	status = ice_init_all_ctrlq(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	/* Enable FW logging. Not fatal if this fails. */
+	status = ice_cfg_fw_log(hw, true);
+	if (status)
+		ice_debug(hw, ICE_DBG_INIT, "Failed to enable FW logging.\n");
+
+	status = ice_clear_pf_cfg(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	ice_clear_pxe_mode(hw);
+
+	status = ice_init_nvm(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	status = ice_get_caps(hw);
+	if (status)
+		goto err_unroll_cqinit;
+
+	hw->port_info = (struct ice_port_info *)
+			ice_malloc(hw, sizeof(*hw->port_info));
+	if (!hw->port_info) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_cqinit;
+	}
+
+	/* set the back pointer to hw */
+	hw->port_info->hw = hw;
+
+	/* Initialize port_info struct with switch configuration data */
+	status = ice_get_initial_sw_cfg(hw);
+	if (status)
+		goto err_unroll_alloc;
+
+	hw->evb_veb = true;
+
+	/* Query the allocated resources for Tx scheduler */
+	status = ice_sched_query_res_alloc(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_SCHED,
+			  "Failed to get scheduler allocated resources\n");
+		goto err_unroll_alloc;
+	}
+
+	/* Initialize port_info struct with scheduler data */
+	status = ice_sched_init_port(hw->port_info);
+	if (status)
+		goto err_unroll_sched;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_sched;
+	}
+
+	/* Initialize port_info struct with PHY capabilities */
+	status = ice_aq_get_phy_caps(hw->port_info, false,
+				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
+	ice_free(hw, pcaps);
+	if (status)
+		goto err_unroll_sched;
+
+	/* Initialize port_info struct with link information */
+	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
+	if (status)
+		goto err_unroll_sched;
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+	INIT_LIST_HEAD(&hw->agg_list);
+	/* Initialize max burst size */
+	if (!hw->max_burst_size)
+		ice_cfg_rl_burst_size(hw, ICE_SCHED_DFLT_BURST_SIZE);
+
+	status = ice_init_fltr_mgmt_struct(hw);
+	if (status)
+		goto err_unroll_sched;
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+	/* some of the register write workarounds to get Rx working */
+	ice_dev_onetime_setup(hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+	/* Get MAC information */
+	/* A single port can report up to two (LAN and WoL) addresses */
+	mac_buf = ice_calloc(hw, 2,
+			     sizeof(struct ice_aqc_manage_mac_read_resp));
+	mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp);
+
+	if (!mac_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll_fltr_mgmt_struct;
+	}
+
+	status = ice_aq_manage_mac_read(hw, mac_buf, mac_buf_len, NULL);
+	ice_free(hw, mac_buf);
+
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
+
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
+	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
+
+	return ICE_SUCCESS;
+
+err_unroll_fltr_mgmt_struct:
+	ice_cleanup_fltr_mgmt_struct(hw);
+err_unroll_sched:
+	ice_sched_cleanup_all(hw);
+err_unroll_alloc:
+	ice_free(hw, hw->port_info);
+	hw->port_info = NULL;
+err_unroll_cqinit:
+	ice_shutdown_all_ctrlq(hw);
+	return status;
+}
+
+/**
+ * ice_deinit_hw - unroll initialization operations done by ice_init_hw
+ * @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing, since ice_init_hw() itself unrolls the
+ * applicable initializations if it fails for any reason.
+ */
+void ice_deinit_hw(struct ice_hw *hw)
+{
+	ice_cleanup_fltr_mgmt_struct(hw);
+
+	ice_sched_cleanup_all(hw);
+	ice_sched_clear_agg(hw);
+
+	if (hw->port_info) {
+		ice_free(hw, hw->port_info);
+		hw->port_info = NULL;
+	}
+
+	/* Attempt to disable FW logging before shutting down control queues */
+	ice_cfg_fw_log(hw, false);
+	ice_shutdown_all_ctrlq(hw);
+
+	/* Clear VSI contexts if not already cleared */
+	ice_clear_all_vsi_ctx(hw);
+}
+
+/**
+ * ice_check_reset - Check to see if a global reset is complete
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_check_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg = 0, grst_delay;
+
+	/* Poll for Device Active state in case a recent CORER, GLOBR,
+	 * or EMPR has occurred. The grst delay value is in 100ms units.
+	 * Add 1sec for outstanding AQ commands that can take a long time.
+	 */
+#define GLGEN_RSTCTL		0x000B8180 /* Reset Source: POR */
+#define GLGEN_RSTCTL_GRSTDEL_S	0
+#define GLGEN_RSTCTL_GRSTDEL_M	MAKEMASK(0x3F, GLGEN_RSTCTL_GRSTDEL_S)
+	grst_delay = ((rd32(hw, GLGEN_RSTCTL) & GLGEN_RSTCTL_GRSTDEL_M) >>
+		      GLGEN_RSTCTL_GRSTDEL_S) + 10;
+
+	for (cnt = 0; cnt < grst_delay; cnt++) {
+		ice_msec_delay(100, true);
+		reg = rd32(hw, GLGEN_RSTAT);
+		if (!(reg & GLGEN_RSTAT_DEVSTATE_M))
+			break;
+	}
+
+	if (cnt == grst_delay) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Global reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+#define ICE_RESET_DONE_MASK	(GLNVM_ULD_CORER_DONE_M | \
+				 GLNVM_ULD_GLOBR_DONE_M)
+
+	/* Device is Active; check Global Reset processes are done */
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK;
+		if (reg == ICE_RESET_DONE_MASK) {
+			ice_debug(hw, ICE_DBG_INIT,
+				  "Global reset processes done. %d\n", cnt);
+			break;
+		}
+		ice_msec_delay(10, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+			  reg);
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_pf_reset - Reset the PF
+ * @hw: pointer to the hardware structure
+ *
+ * If a global reset has been triggered, this function checks
+ * for its completion and then issues the PF reset
+ */
+static enum ice_status ice_pf_reset(struct ice_hw *hw)
+{
+	u32 cnt, reg;
+
+	/* If at function entry a global reset was already in progress, i.e.
+	 * state is not 'device active' or any of the reset done bits are not
+	 * set in GLNVM_ULD, there is no need for a PF Reset; poll until the
+	 * global reset is done.
+	 */
+	if ((rd32(hw, GLGEN_RSTAT) & GLGEN_RSTAT_DEVSTATE_M) ||
+	    (rd32(hw, GLNVM_ULD) & ICE_RESET_DONE_MASK) ^ ICE_RESET_DONE_MASK) {
+		/* poll on global reset currently in progress until done */
+		if (ice_check_reset(hw))
+			return ICE_ERR_RESET_FAILED;
+
+		return ICE_SUCCESS;
+	}
+
+	/* Reset the PF */
+	reg = rd32(hw, PFGEN_CTRL);
+
+	wr32(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR_M));
+
+	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
+		reg = rd32(hw, PFGEN_CTRL);
+		if (!(reg & PFGEN_CTRL_PFSWR_M))
+			break;
+
+		ice_msec_delay(1, true);
+	}
+
+	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
+		ice_debug(hw, ICE_DBG_INIT,
+			  "PF reset polling failed to complete.\n");
+		return ICE_ERR_RESET_FAILED;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_reset - Perform different types of reset
+ * @hw: pointer to the hardware structure
+ * @req: reset request
+ *
+ * This function triggers a reset as specified by the req parameter.
+ *
+ * Note:
+ * If anything other than a PF reset is triggered, PXE mode is restored.
+ * This has to be cleared using ice_clear_pxe_mode again, once the AQ
+ * interface has been restored in the rebuild flow.
+ */
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req)
+{
+	u32 val = 0;
+
+	switch (req) {
+	case ICE_RESET_PFR:
+		return ice_pf_reset(hw);
+	case ICE_RESET_CORER:
+		ice_debug(hw, ICE_DBG_INIT, "CoreR requested\n");
+		val = GLGEN_RTRIG_CORER_M;
+		break;
+	case ICE_RESET_GLOBR:
+		ice_debug(hw, ICE_DBG_INIT, "GlobalR requested\n");
+		val = GLGEN_RTRIG_GLOBR_M;
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+
+	val |= rd32(hw, GLGEN_RTRIG);
+	wr32(hw, GLGEN_RTRIG, val);
+	ice_flush(hw);
+
+	/* wait for the FW to be ready */
+	return ice_check_reset(hw);
+}
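+
+/* A minimal usage sketch (hypothetical helper): a PFR does not restore
+ * PXE mode, so no ice_clear_pxe_mode() follow-up is needed; see the
+ * note above for CORER/GLOBR.
+ */
+static enum ice_status ice_example_pf_reset(struct ice_hw *hw)
+{
+	return ice_reset(hw, ICE_RESET_PFR);
+}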
+
+/**
+ * ice_copy_rxq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Copies rxq context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+{
+	u8 i;
+
+	if (!ice_rxq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QRX_CONTEXT(i, rxq_index),
+		     *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Rx Queue Context */
+static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+	/* Field		Width	LSB */
+	ICE_CTX_STORE(ice_rlan_ctx, head,		13,	0),
+	ICE_CTX_STORE(ice_rlan_ctx, cpuid,		8,	13),
+	ICE_CTX_STORE(ice_rlan_ctx, base,		57,	32),
+	ICE_CTX_STORE(ice_rlan_ctx, qlen,		13,	89),
+	ICE_CTX_STORE(ice_rlan_ctx, dbuf,		7,	102),
+	ICE_CTX_STORE(ice_rlan_ctx, hbuf,		5,	109),
+	ICE_CTX_STORE(ice_rlan_ctx, dtype,		2,	114),
+	ICE_CTX_STORE(ice_rlan_ctx, dsize,		1,	116),
+	ICE_CTX_STORE(ice_rlan_ctx, crcstrip,		1,	117),
+	ICE_CTX_STORE(ice_rlan_ctx, l2tsel,		1,	119),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_0,		4,	120),
+	ICE_CTX_STORE(ice_rlan_ctx, hsplit_1,		2,	124),
+	ICE_CTX_STORE(ice_rlan_ctx, showiv,		1,	127),
+	ICE_CTX_STORE(ice_rlan_ctx, rxmax,		14,	174),
+	ICE_CTX_STORE(ice_rlan_ctx, tphrdesc_ena,	1,	193),
+	ICE_CTX_STORE(ice_rlan_ctx, tphwdesc_ena,	1,	194),
+	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
+	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
+	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	{ 0 }
+};
+
+/**
+ * ice_write_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rlan_ctx: pointer to the rxq context
+ * @rxq_index: the index of the Rx queue
+ *
+ * Converts rxq context from sparse to dense structure and then writes
+ * it to hw register space
+ */
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index)
+{
+	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+
+	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+}
+
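+/* A minimal usage sketch with hypothetical values: populate the sparse
+ * context and let ice_write_rxq_ctx() pack it into the dense layout
+ * described by ice_rlan_ctx_info above. ICE_RLAN_CTX_DBUF_S is assumed
+ * from ice_lan_tx_rx.h; base and dbuf are in 128-byte units.
+ */
+static enum ice_status
+ice_example_setup_rxq(struct ice_hw *hw, u64 dma_base, u16 nb_desc, u32 q_idx)
+{
+	struct ice_rlan_ctx rlan_ctx = { 0 };
+
+	rlan_ctx.base = dma_base >> 7;
+	rlan_ctx.qlen = nb_desc;
+	rlan_ctx.dbuf = 2048 >> ICE_RLAN_CTX_DBUF_S;	/* 2 kB buffers */
+	return ice_write_rxq_ctx(hw, &rlan_ctx, q_idx);
+}
+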
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_clear_rxq_ctx
+ * @hw: pointer to the hardware structure
+ * @rxq_index: the index of the Rx queue to clear
+ *
+ * Clears rxq context in hw register space
+ */
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index)
+{
+	u8 i;
+
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QRX_CONTEXT(i, rxq_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/* LAN Tx Queue Context */
+const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+				    /* Field			Width	LSB */
+	ICE_CTX_STORE(ice_tlan_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tlan_ctx, port_num,			3,	57),
+	ICE_CTX_STORE(ice_tlan_ctx, cgd_num,			5,	60),
+	ICE_CTX_STORE(ice_tlan_ctx, pf_num,			3,	65),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_num,			10,	68),
+	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
+	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
+	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
+	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
+	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd_desc,			1,	102),
+	ICE_CTX_STORE(ice_tlan_ctx, tphrd,			1,	103),
+	ICE_CTX_STORE(ice_tlan_ctx, tphwr_desc,			1,	104),
+	ICE_CTX_STORE(ice_tlan_ctx, cmpq_id,			9,	105),
+	ICE_CTX_STORE(ice_tlan_ctx, qnum_in_func,		14,	114),
+	ICE_CTX_STORE(ice_tlan_ctx, itr_notification_mode,	1,	128),
+	ICE_CTX_STORE(ice_tlan_ctx, adjust_prof_id,		6,	129),
+	ICE_CTX_STORE(ice_tlan_ctx, qlen,			13,	135),
+	ICE_CTX_STORE(ice_tlan_ctx, quanta_prof_idx,		4,	148),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_ena,			1,	152),
+	ICE_CTX_STORE(ice_tlan_ctx, tso_qnum,			11,	153),
+	ICE_CTX_STORE(ice_tlan_ctx, legacy_int,			1,	164),
+	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
+	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
+	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	{ 0 }
+};
+
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+/**
+ * ice_copy_tx_cmpltnq_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_cmpltnq_ctx: pointer to the Tx completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Copies Tx completion q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_cmpltnq_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_cmpltnq_ctx,
+			      u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (!ice_tx_cmpltnq_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index),
+		     *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "cmpltnqdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_cmpltnq_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Completion Queue Context */
+static const struct ice_ctx_ele ice_tx_cmpltnq_ctx_info[] = {
+				       /* Field			Width   LSB */
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, base,			57,	0),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, q_len,		18,	64),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, generation,		1,	96),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, wrt_ptr,		22,	97),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, pf_num,		3,	128),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_num,		10,	131),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, vmvf_type,		2,	141),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, tph_desc_wr,		1,	160),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cpuid,		8,	161),
+	ICE_CTX_STORE(ice_tx_cmpltnq_ctx, cmpltn_cache,		512,	192),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_ctx: pointer to the completion queue context
+ * @tx_cmpltnq_index: the index of the completion queue
+ *
+ * Converts completion queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index)
+{
+	u8 ctx_buf[ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_cmpltnq_ctx, ctx_buf, ice_tx_cmpltnq_ctx_info);
+	return ice_copy_tx_cmpltnq_ctx_to_hw(hw, ctx_buf, tx_cmpltnq_index);
+}
+
+/**
+ * ice_clear_tx_cmpltnq_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_cmpltnq_index: the index of the completion queue to clear
+ *
+ * Clears Tx completion queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index)
+{
+	u8 i;
+
+	if (tx_cmpltnq_index > GLTCLAN_CQ_CNTX0_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS; i++)
+		wr32(hw, GLTCLAN_CQ_CNTX(i, tx_cmpltnq_index), 0);
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_copy_tx_drbell_q_ctx_to_hw
+ * @hw: pointer to the hardware structure
+ * @ice_tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Copies doorbell q context from dense structure to hw register space
+ */
+static enum ice_status
+ice_copy_tx_drbell_q_ctx_to_hw(struct ice_hw *hw, u8 *ice_tx_drbell_q_ctx,
+			       u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (!ice_tx_drbell_q_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Copy each dword separately to hw */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++) {
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index),
+		     *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+
+		ice_debug(hw, ICE_DBG_QCTX, "tx_drbell_qdata[%d]: %08X\n", i,
+			  *((u32 *)(ice_tx_drbell_q_ctx + (i * sizeof(u32)))));
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* LAN Tx Doorbell Queue Context info */
+static const struct ice_ctx_ele ice_tx_drbell_q_ctx_info[] = {
+					/* Field		Width   LSB */
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, base,		57,	0),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, ring_len,		13,	64),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, pf_num,		3,	80),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vf_num,		8,	84),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, vmvf_type,		2,	94),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, cpuid,		8,	96),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_rd,		1,	104),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, tph_desc_wr,		1,	108),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, db_q_en,		1,	112),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_head,		13,	128),
+	ICE_CTX_STORE(ice_tx_drbell_q_ctx, rd_tail,		13,	144),
+	{ 0 }
+};
+
+/**
+ * ice_write_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_ctx: pointer to the doorbell queue context
+ * @tx_drbell_q_index: the index of the doorbell queue
+ *
+ * Converts doorbell queue context from sparse to dense structure and then
+ * writes it to hw register space
+ */
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index)
+{
+	u8 ctx_buf[ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS * sizeof(u32)] = { 0 };
+
+	ice_set_ctx((u8 *)tx_drbell_q_ctx, ctx_buf, ice_tx_drbell_q_ctx_info);
+	return ice_copy_tx_drbell_q_ctx_to_hw(hw, ctx_buf, tx_drbell_q_index);
+}
+
+/**
+ * ice_clear_tx_drbell_q_ctx
+ * @hw: pointer to the hardware structure
+ * @tx_drbell_q_index: the index of the doorbell queue to clear
+ *
+ * Clears doorbell queue context in hw register space
+ */
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
+{
+	u8 i;
+
+	if (tx_drbell_q_index > QTX_COMM_DBLQ_DBELL_MAX_INDEX)
+		return ICE_ERR_PARAM;
+
+	/* Clear each dword register separately */
+	for (i = 0; i < ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS; i++)
+		wr32(hw, QTX_COMM_DBLQ_CNTX(i, tx_drbell_q_index), 0);
+
+	return ICE_SUCCESS;
+}
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @mask: debug mask
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps a debug log entry about a control queue command, including the
+ * descriptor contents.
+ */
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 len;
+
+	if (!(mask & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	len = LE16_TO_CPU(cq_desc->datalen);
+
+	ice_debug(hw, mask,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode),
+		  LE16_TO_CPU(cq_desc->flags),
+		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	if (buf && cq_desc->datalen != 0) {
+		ice_debug(hw, mask, "Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+
+		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
+	}
+}
+
+/* FW Admin Queue command wrappers */
+
+/**
+ * ice_aq_send_cmd - send FW Admin Queue command to FW Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: descriptor describing the command
+ * @buf: buffer to use for indirect commands (NULL for direct commands)
+ * @buf_size: size of buffer for indirect commands (0 for direct commands)
+ * @cd: pointer to command details structure
+ *
+ * Helper function to send FW Admin Queue commands to the FW Admin Queue.
+ */
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
+		u16 buf_size, struct ice_sq_cd *cd)
+{
+	return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_fw_ver
+ * @hw: pointer to the hw struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the firmware version (0x0001) from the admin queue commands
+ */
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_get_ver *resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	resp = &desc.params.get_ver;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_ver);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	if (!status) {
+		hw->fw_branch = resp->fw_branch;
+		hw->fw_maj_ver = resp->fw_major;
+		hw->fw_min_ver = resp->fw_minor;
+		hw->fw_patch = resp->fw_patch;
+		hw->fw_build = LE32_TO_CPU(resp->fw_build);
+		hw->api_branch = resp->api_branch;
+		hw->api_maj_ver = resp->api_major;
+		hw->api_min_ver = resp->api_minor;
+		hw->api_patch = resp->api_patch;
+	}
+
+	return status;
+}
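+
+/* A minimal usage sketch (hypothetical helper): after a successful
+ * query, the version fields cached on the hw struct can be reported.
+ */
+static void ice_example_log_fw_ver(struct ice_hw *hw)
+{
+	if (!ice_aq_get_fw_ver(hw, NULL))
+		ice_debug(hw, ICE_DBG_INIT, "FW %d.%d.%d, API %d.%d\n",
+			  hw->fw_maj_ver, hw->fw_min_ver, hw->fw_patch,
+			  hw->api_maj_ver, hw->api_min_ver);
+}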
+
+/**
+ * ice_aq_q_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well (0x0003).
+ */
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
+{
+	struct ice_aqc_q_shutdown *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.q_shutdown;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_req_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @access: access type
+ * @sdp_number: resource number
+ * @timeout: the maximum time in ms that the driver may hold the resource
+ * @cd: pointer to command details structure or NULL
+ *
+ * Requests common resource using the admin queue commands (0x0008).
+ * When attempting to acquire the Global Config Lock, the driver can
+ * learn of three states:
+ *  1) ICE_SUCCESS -        acquired lock, and can perform download package
+ *  2) ICE_ERR_AQ_ERROR -   did not get lock, driver should fail to load
+ *  3) ICE_ERR_AQ_NO_WORK - did not get lock, but another driver has
+ *                          successfully downloaded the package; the driver does
+ *                          not have to download the package and can continue
+ *                          loading
+ *
+ * Note that if the caller is in an acquire-lock, perform-action, release-lock
+ * phase of operation, the FW may detect a timeout and issue
+ * a CORER. In this case, the driver will receive a CORER interrupt and will
+ * have to determine its cause. The calling thread that is handling this flow
+ * will likely get an error propagated back to it indicating the Download
+ * Package, Update Package or the Release Resource AQ commands timed out.
+ */
+static enum ice_status
+ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+	       enum ice_aq_res_access_type access, u8 sdp_number, u32 *timeout,
+	       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+
+	cmd_resp = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_req_res);
+
+	cmd_resp->res_id = CPU_TO_LE16(res);
+	cmd_resp->access_type = CPU_TO_LE16(access);
+	cmd_resp->res_number = CPU_TO_LE32(sdp_number);
+	cmd_resp->timeout = CPU_TO_LE32(*timeout);
+	*timeout = 0;
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+
+	/* The completion's Timeout field specifies the maximum time in ms
+	 * that the driver may hold the resource.
+	 */
+
+	/* Global config lock response utilizes an additional status field.
+	 *
+	 * If the Global config lock resource is held by some other driver, the
+	 * command completes with ICE_AQ_RES_GLBL_IN_PROG in the status field
+	 * and the timeout field indicates the maximum time the current owner
+	 * of the resource has to free it.
+	 */
+	if (res == ICE_GLOBAL_CFG_LOCK_RES_ID) {
+		if (LE16_TO_CPU(cmd_resp->status) == ICE_AQ_RES_GLBL_SUCCESS) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_SUCCESS;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_IN_PROG) {
+			*timeout = LE32_TO_CPU(cmd_resp->timeout);
+			return ICE_ERR_AQ_ERROR;
+		} else if (LE16_TO_CPU(cmd_resp->status) ==
+			   ICE_AQ_RES_GLBL_DONE) {
+			return ICE_ERR_AQ_NO_WORK;
+		}
+
+		/* invalid FW response, force a timeout immediately */
+		*timeout = 0;
+		return ICE_ERR_AQ_ERROR;
+	}
+
+	/* If the resource is held by some other driver, the command completes
+	 * with a busy return value and the timeout field indicates the maximum
+	 * time the current owner of the resource has to free it.
+	 */
+	if (!status || hw->adminq.sq_last_status == ICE_AQ_RC_EBUSY)
+		*timeout = LE32_TO_CPU(cmd_resp->timeout);
+
+	return status;
+}
+
+/**
+ * ice_aq_release_res
+ * @hw: pointer to the hw struct
+ * @res: resource id
+ * @sdp_number: resource number
+ * @cd: pointer to command details structure or NULL
+ *
+ * release common resource using the admin queue commands (0x0009)
+ */
+static enum ice_status
+ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_req_res *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+
+	cmd = &desc.params.res_owner;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_release_res);
+
+	cmd->res_id = CPU_TO_LE16(res);
+	cmd->res_number = CPU_TO_LE32(sdp_number);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_acquire_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ * @access: access type (read or write)
+ * @timeout: timeout in milliseconds
+ *
+ * This function will attempt to acquire the ownership of a resource.
+ */
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout)
+{
+#define ICE_RES_POLLING_DELAY_MS	10
+	u32 delay = ICE_RES_POLLING_DELAY_MS;
+	u32 time_left = timeout;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+
+	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+	/* A return code of ICE_ERR_AQ_NO_WORK means that another driver has
+	 * previously acquired the resource and performed any necessary updates;
+	 * in this case the caller does not obtain the resource and has no
+	 * further work to do.
+	 */
+	if (status == ICE_ERR_AQ_NO_WORK)
+		goto ice_acquire_res_exit;
+
+	if (status)
+		ice_debug(hw, ICE_DBG_RES,
+			  "resource %d acquire type %d failed.\n", res, access);
+
+	/* If necessary, poll until the current lock owner times out */
+	timeout = time_left;
+	while (status && timeout && time_left) {
+		ice_msec_delay(delay, true);
+		timeout = (timeout > delay) ? timeout - delay : 0;
+		status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
+
+		if (status == ICE_ERR_AQ_NO_WORK)
+			/* lock free, but no work to do */
+			break;
+
+		if (!status)
+			/* lock acquired */
+			break;
+	}
+	if (status && status != ICE_ERR_AQ_NO_WORK)
+		ice_debug(hw, ICE_DBG_RES, "resource acquire timed out.\n");
+
+ice_acquire_res_exit:
+	if (status == ICE_ERR_AQ_NO_WORK) {
+		if (access == ICE_RES_WRITE)
+			ice_debug(hw, ICE_DBG_RES,
+				  "resource indicates no work to do.\n");
+		else
+			ice_debug(hw, ICE_DBG_RES,
+				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+	}
+	return status;
+}
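+
+/* A minimal acquire/use/release sketch (hypothetical helper; the
+ * 3000 ms timeout is an arbitrary example value, while ICE_NVM_RES_ID
+ * and ICE_RES_READ come from the shared type definitions).
+ */
+static enum ice_status ice_example_with_nvm_lock(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_acquire_res(hw, ICE_NVM_RES_ID, ICE_RES_READ, 3000);
+	if (status)
+		return status;
+	/* ... access the protected resource here ... */
+	ice_release_res(hw, ICE_NVM_RES_ID);
+	return ICE_SUCCESS;
+}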
+
+/**
+ * ice_release_res
+ * @hw: pointer to the HW structure
+ * @res: resource id
+ *
+ * This function will release a resource using the proper Admin Command.
+ */
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
+{
+	enum ice_status status;
+	u32 total_delay = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+
+	status = ice_aq_release_res(hw, res, 0, NULL);
+
+	/* There are some rare cases when trying to release the resource
+	 * results in an admin queue timeout; handle them by retrying until
+	 * the command succeeds or the SQ command timeout elapses.
+	 */
+	while ((status == ICE_ERR_AQ_TIMEOUT) &&
+	       (total_delay < hw->adminq.sq_cmd_timeout)) {
+		ice_msec_delay(1, true);
+		status = ice_aq_release_res(hw, res, 0, NULL);
+		total_delay++;
+	}
+}
+
+/**
+ * ice_aq_alloc_free_res - command to allocate/free resources
+ * @hw: pointer to the hw struct
+ * @num_entries: number of resource entries in buffer
+ * @buf: Indirect buffer to hold data parameters and response
+ * @buf_size: size of buffer for indirect commands
+ * @opc: pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Helper function to allocate/free resources using the admin queue commands
+ */
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_alloc_free_res_cmd *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+
+	cmd = &desc.params.sw_res_ctrl;
+
+	if (!buf)
+		return ICE_ERR_PARAM;
+
+	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_entries = CPU_TO_LE16(num_entries);
+
+	return ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+}
+
+/**
+ * ice_get_num_per_func - determine number of resources per PF
+ * @hw: pointer to the hw structure
+ * @max: value to be evenly divided among the valid PFs
+ *
+ * Determine the number of valid functions by going through the bitmap returned
+ * from parsing capabilities and use this to calculate the number of resources
+ * per PF based on the max value passed in.
+ */
+static u32 ice_get_num_per_func(struct ice_hw *hw, u32 max)
+{
+	u8 funcs;
+
+#define ICE_CAPS_VALID_FUNCS_M	0xFF
+	funcs = ice_hweight8(hw->dev_caps.common_cap.valid_functions &
+			     ICE_CAPS_VALID_FUNCS_M);
+
+	if (!funcs)
+		return 0;
+
+	return max / funcs;
+}
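+
+/* Worked example for ice_get_num_per_func(): with a valid_functions
+ * bitmap of 0x0F (four PFs) and max == ICE_MAX_VSI, each PF is
+ * guaranteed ICE_MAX_VSI / 4 VSIs, matching its use for
+ * ICE_AQC_CAPS_VSI in ice_parse_caps() below.
+ */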
+
+/**
+ * ice_parse_caps - parse function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: pointer to a buffer containing function/device capability records
+ * @cap_count: number of capability records in the list
+ * @opc: type of capabilities list to parse
+ *
+ * Helper function to parse function(0x000a)/device(0x000b) capabilities list.
+ */
+static void
+ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
+	       enum ice_adminq_opc opc)
+{
+	struct ice_aqc_list_caps_elem *cap_resp;
+	struct ice_hw_func_caps *func_p = NULL;
+	struct ice_hw_dev_caps *dev_p = NULL;
+	struct ice_hw_common_caps *caps;
+	u32 i;
+
+	if (!buf)
+		return;
+
+	cap_resp = (struct ice_aqc_list_caps_elem *)buf;
+
+	if (opc == ice_aqc_opc_list_dev_caps) {
+		dev_p = &hw->dev_caps;
+		caps = &dev_p->common_cap;
+	} else if (opc == ice_aqc_opc_list_func_caps) {
+		func_p = &hw->func_caps;
+		caps = &func_p->common_cap;
+	} else {
+		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
+		return;
+	}
+
+	for (i = 0; caps && i < cap_count; i++, cap_resp++) {
+		u32 logical_id = LE32_TO_CPU(cap_resp->logical_id);
+		u32 phys_id = LE32_TO_CPU(cap_resp->phys_id);
+		u32 number = LE32_TO_CPU(cap_resp->number);
+		u16 cap = LE16_TO_CPU(cap_resp->cap);
+
+		switch (cap) {
+		case ICE_AQC_CAPS_VALID_FUNCTIONS:
+			caps->valid_functions = number;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Valid Functions = %d\n",
+				  caps->valid_functions);
+			break;
+		case ICE_AQC_CAPS_VSI:
+			if (dev_p) {
+				dev_p->num_vsi_allocd_to_host = number;
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.VSI cnt = %d\n",
+					  dev_p->num_vsi_allocd_to_host);
+			} else if (func_p) {
+				func_p->guar_num_vsi =
+					ice_get_num_per_func(hw, ICE_MAX_VSI);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Func.VSI cnt = %d\n",
+					  number);
+			}
+			break;
+		case ICE_AQC_CAPS_RSS:
+			caps->rss_table_size = number;
+			caps->rss_table_entry_width = logical_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table size = %d\n",
+				  caps->rss_table_size);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: RSS table width = %d\n",
+				  caps->rss_table_entry_width);
+			break;
+		case ICE_AQC_CAPS_RXQS:
+			caps->num_rxq = number;
+			caps->rxq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Rx first queue ID = %d\n",
+				  caps->rxq_first_id);
+			break;
+		case ICE_AQC_CAPS_TXQS:
+			caps->num_txq = number;
+			caps->txq_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Tx first queue ID = %d\n",
+				  caps->txq_first_id);
+			break;
+		case ICE_AQC_CAPS_MSIX:
+			caps->num_msix_vectors = number;
+			caps->msix_vector_first_id = phys_id;
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX vector count = %d\n",
+				  caps->num_msix_vectors);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: MSIX first vector index = %d\n",
+				  caps->msix_vector_first_id);
+			break;
+		case ICE_AQC_CAPS_MAX_MTU:
+			caps->max_mtu = number;
+			if (dev_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: Dev.MaxMTU = %d\n",
+					  caps->max_mtu);
+			else if (func_p)
+				ice_debug(hw, ICE_DBG_INIT,
+					  "HW caps: func.MaxMTU = %d\n",
+					  caps->max_mtu);
+			break;
+		default:
+			ice_debug(hw, ICE_DBG_INIT,
+				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
+				  cap);
+			break;
+		}
+	}
+}
+
+/**
+ * ice_aq_discover_caps - query function/device capabilities
+ * @hw: pointer to the hw struct
+ * @buf: a virtual buffer to hold the capabilities
+ * @buf_size: size of the virtual buffer
+ * @cap_count: updated with the required capability count if the AQ
+ *             returns ENOMEM
+ * @opc: capabilities type to discover - pass in the command opcode
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get the function(0x000a)/device(0x000b) capabilities description from
+ * the firmware.
+ */
+static enum ice_status
+ice_aq_discover_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
+		     enum ice_adminq_opc opc, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_list_caps *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	cmd = &desc.params.get_cap;
+
+	if (opc != ice_aqc_opc_list_func_caps &&
+	    opc != ice_aqc_opc_list_dev_caps)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+
+	status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+	if (!status)
+		ice_parse_caps(hw, buf, LE32_TO_CPU(cmd->count), opc);
+	else if (hw->adminq.sq_last_status == ICE_AQ_RC_ENOMEM)
+		*cap_count = LE32_TO_CPU(cmd->count);
+	return status;
+}
+
+/**
+ * ice_discover_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ * @opc: capabilities type to discover - pass in the command opcode
+ */
+static enum ice_status
+ice_discover_caps(struct ice_hw *hw, enum ice_adminq_opc opc)
+{
+	enum ice_status status;
+	u32 cap_count;
+	u16 cbuf_len;
+	u8 retries;
+
+	/* The driver doesn't know how many capabilities the device will return
+	 * so the buffer size required isn't known ahead of time. The driver
+	 * starts with cbuf_len and if this turns out to be insufficient, the
+	 * device returns ICE_AQ_RC_ENOMEM and also the cap_count it needs.
+	 * The driver then allocates the buffer based on the count and retries
+	 * the operation. So it follows that the retry count is 2.
+	 */
+#define ICE_GET_CAP_BUF_COUNT	40
+#define ICE_GET_CAP_RETRY_COUNT	2
+
+	cap_count = ICE_GET_CAP_BUF_COUNT;
+	retries = ICE_GET_CAP_RETRY_COUNT;
+
+	do {
+		void *cbuf;
+
+		cbuf_len = (u16)(cap_count *
+				 sizeof(struct ice_aqc_list_caps_elem));
+		cbuf = ice_malloc(hw, cbuf_len);
+		if (!cbuf)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_discover_caps(hw, cbuf, cbuf_len, &cap_count,
+					      opc, NULL);
+		ice_free(hw, cbuf);
+
+		if (!status || hw->adminq.sq_last_status != ICE_AQ_RC_ENOMEM)
+			break;
+
+		/* If ENOMEM is returned, try again with a bigger buffer */
+	} while (--retries);
+
+	return status;
+}
+
+/**
+ * ice_get_caps - get info about the HW
+ * @hw: pointer to the hardware structure
+ */
+enum ice_status ice_get_caps(struct ice_hw *hw)
+{
+	enum ice_status status;
+
+	status = ice_discover_caps(hw, ice_aqc_opc_list_dev_caps);
+	if (!status)
+		status = ice_discover_caps(hw, ice_aqc_opc_list_func_caps);
+
+	return status;
+}
+
+/**
+ * ice_aq_manage_mac_write - manage MAC address write command
+ * @hw: pointer to the hw struct
+ * @mac_addr: MAC address to be written as LAA/LAA+WoL/Port address
+ * @flags: flags to control write behavior
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function is used to write MAC address to the NVM (0x0108).
+ */
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd)
+{
+	struct ice_aqc_manage_mac_write *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.mac_write;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write);
+
+	cmd->flags = flags;
+
+	/* Prep values for flags, sah, sal */
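+	/* Worked example with a hypothetical MAC 00:11:22:33:44:55: sah
+	 * ends up as 0x0011 and sal as 0x22334455, both in network byte
+	 * order.
+	 */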
+	cmd->sah = HTONS(*((const u16 *)mac_addr));
+	cmd->sal = HTONL(*((const u32 *)(mac_addr + 2)));
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_clear_pxe_mode
+ * @hw: pointer to the hw struct
+ *
+ * Tell the firmware that the driver is taking over from PXE (0x0110).
+ */
+static enum ice_status ice_aq_clear_pxe_mode(struct ice_hw *hw)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_clear_pxe_mode);
+	desc.params.clear_pxe.rx_cnt = ICE_AQC_CLEAR_PXE_RX_CNT;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_clear_pxe_mode - clear pxe operations mode
+ * @hw: pointer to the hw struct
+ *
+ * Make sure all PXE mode settings are cleared, including things
+ * like descriptor fetch/write-back mode.
+ */
+void ice_clear_pxe_mode(struct ice_hw *hw)
+{
+	if (ice_check_sq_alive(hw, &hw->adminq))
+		ice_aq_clear_pxe_mode(hw);
+}
+
+/**
+ * ice_get_link_speed_based_on_phy_type - returns link speed
+ * @phy_type_low: lower part of phy_type
+ * @phy_type_high: higher part of phy_type
+ *
+ * This helper function will convert an entry in the PHY type structure
+ * [phy_type_low, phy_type_high] to its corresponding link speed.
+ * Note: exactly one bit should be set in [phy_type_low, phy_type_high],
+ * as this function converts a single PHY type to its speed.
+ * If no bit is set, or if more than one bit is set,
+ * ICE_AQ_LINK_SPEED_UNKNOWN is returned.
+ */
+static u16
+ice_get_link_speed_based_on_phy_type(u64 phy_type_low, u64 phy_type_high)
+{
+	u16 speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u16 speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+
+	switch (phy_type_low) {
+	case ICE_PHY_TYPE_LOW_100BASE_TX:
+	case ICE_PHY_TYPE_LOW_100M_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100MB;
+		break;
+	case ICE_PHY_TYPE_LOW_1000BASE_T:
+	case ICE_PHY_TYPE_LOW_1000BASE_SX:
+	case ICE_PHY_TYPE_LOW_1000BASE_LX:
+	case ICE_PHY_TYPE_LOW_1000BASE_KX:
+	case ICE_PHY_TYPE_LOW_1G_SGMII:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_1000MB;
+		break;
+	case ICE_PHY_TYPE_LOW_2500BASE_T:
+	case ICE_PHY_TYPE_LOW_2500BASE_X:
+	case ICE_PHY_TYPE_LOW_2500BASE_KX:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_2500MB;
+		break;
+	case ICE_PHY_TYPE_LOW_5GBASE_T:
+	case ICE_PHY_TYPE_LOW_5GBASE_KR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_5GB;
+		break;
+	case ICE_PHY_TYPE_LOW_10GBASE_T:
+	case ICE_PHY_TYPE_LOW_10G_SFI_DA:
+	case ICE_PHY_TYPE_LOW_10GBASE_SR:
+	case ICE_PHY_TYPE_LOW_10GBASE_LR:
+	case ICE_PHY_TYPE_LOW_10GBASE_KR_CR1:
+	case ICE_PHY_TYPE_LOW_10G_SFI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_10G_SFI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_10GB;
+		break;
+	case ICE_PHY_TYPE_LOW_25GBASE_T:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_CR1:
+	case ICE_PHY_TYPE_LOW_25GBASE_SR:
+	case ICE_PHY_TYPE_LOW_25GBASE_LR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR_S:
+	case ICE_PHY_TYPE_LOW_25GBASE_KR1:
+	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_25GB;
+		break;
+	case ICE_PHY_TYPE_LOW_40GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_40GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_40G_XLAUI:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_40GB;
+		break;
+	case ICE_PHY_TYPE_LOW_50GBASE_CR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR2:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR2:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_LAUI2:
+	case ICE_PHY_TYPE_LOW_50G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI2:
+	case ICE_PHY_TYPE_LOW_50GBASE_CP:
+	case ICE_PHY_TYPE_LOW_50GBASE_SR:
+	case ICE_PHY_TYPE_LOW_50GBASE_FR:
+	case ICE_PHY_TYPE_LOW_50GBASE_LR:
+	case ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_50G_AUI1_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_50G_AUI1:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_50GB;
+		break;
+	case ICE_PHY_TYPE_LOW_100GBASE_CR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_LR4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR4:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_CAUI4:
+	case ICE_PHY_TYPE_LOW_100G_AUI4_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_100G_AUI4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4:
+	case ICE_PHY_TYPE_LOW_100GBASE_CP2:
+	case ICE_PHY_TYPE_LOW_100GBASE_SR2:
+	case ICE_PHY_TYPE_LOW_100GBASE_DR:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_low = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	switch (phy_type_high) {
+	case ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_CAUI2:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC:
+	case ICE_PHY_TYPE_HIGH_100G_AUI2:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_100GB;
+		break;
+	default:
+		speed_phy_type_high = ICE_AQ_LINK_SPEED_UNKNOWN;
+		break;
+	}
+
+	if (speed_phy_type_low == ICE_AQ_LINK_SPEED_UNKNOWN &&
+	    speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high != ICE_AQ_LINK_SPEED_UNKNOWN)
+		return ICE_AQ_LINK_SPEED_UNKNOWN;
+	else if (speed_phy_type_low != ICE_AQ_LINK_SPEED_UNKNOWN &&
+		 speed_phy_type_high == ICE_AQ_LINK_SPEED_UNKNOWN)
+		return speed_phy_type_low;
+	else
+		return speed_phy_type_high;
+}
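+
+/* Usage sketch (illustrative only, not part of the driver; assumes the
+ * ICE_PHY_TYPE_LOW_* macros are single-bit masks, as the switch above
+ * requires):
+ *
+ *	u16 speed;
+ *
+ *	speed = ice_get_link_speed_based_on_phy_type(
+ *			ICE_PHY_TYPE_LOW_25GBASE_SR, 0);
+ *	// speed == ICE_AQ_LINK_SPEED_25GB
+ *
+ *	speed = ice_get_link_speed_based_on_phy_type(
+ *			ICE_PHY_TYPE_LOW_25GBASE_SR |
+ *			ICE_PHY_TYPE_LOW_10GBASE_T, 0);
+ *	// two bits set, so speed == ICE_AQ_LINK_SPEED_UNKNOWN
+ */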
+
+/**
+ * ice_update_phy_type
+ * @phy_type_low: pointer to the lower part of phy_type
+ * @phy_type_high: pointer to the higher part of phy_type
+ * @link_speeds_bitmap: targeted link speeds bitmap
+ *
+ * Note: For the format of link_speeds_bitmap, see
+ * [ice_aqc_get_link_status->link_speed]. The caller may pass a
+ * link_speeds_bitmap that includes multiple speeds.
+ *
+ * Each entry in the [phy_type_low, phy_type_high] structure represents
+ * a certain link speed. This helper function turns on the bits in
+ * [phy_type_low, phy_type_high] that correspond to the speeds set in
+ * the link_speeds_bitmap input parameter.
+ */
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap)
+{
+	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+	u64 pt_high;
+	u64 pt_low;
+	int index;
+
+	/* We first check the low part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
+		pt_low = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(pt_low, 0);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_low |= BIT_ULL(index);
+	}
+
+	/* We then check the high part of phy_type */
+	for (index = 0; index <= ICE_PHY_TYPE_HIGH_MAX_INDEX; index++) {
+		pt_high = BIT_ULL(index);
+		speed = ice_get_link_speed_based_on_phy_type(0, pt_high);
+
+		if (link_speeds_bitmap & speed)
+			*phy_type_high |= BIT_ULL(index);
+	}
+}
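+
+/* Usage sketch (illustrative only, not part of the driver): a caller that
+ * wants to advertise 10G and 25G builds the bitmap from the AQ speed
+ * defines and lets this helper expand it into PHY type bits:
+ *
+ *	u64 phy_low = 0, phy_high = 0;
+ *
+ *	ice_update_phy_type(&phy_low, &phy_high,
+ *			    ICE_AQ_LINK_SPEED_10GB | ICE_AQ_LINK_SPEED_25GB);
+ *	// phy_low now has every 10GB and 25GB PHY type bit set
+ */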
+
+/**
+ * ice_aq_set_phy_cfg
+ * @hw: pointer to the hw struct
+ * @lport: logical port number
+ * @cfg: structure with PHY configuration data to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set the various PHY configuration parameters supported on the Port.
+ * One or more of the Set PHY config parameters may be ignored in an MFP
+ * mode as the PF may not have the privilege to set some of the PHY Config
+ * parameters. This status will be indicated by the command response (0x0601).
+ */
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	if (!cfg)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
+	desc.params.set_phy.lport_num = lport;
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+}
+
+/**
+ * ice_update_link_info - update status of the HW network link
+ * @pi: port info structure of the interested logical port
+ */
+enum ice_status ice_update_link_info(struct ice_port_info *pi)
+{
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	struct ice_phy_info *phy_info;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	hw = pi->hw;
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	phy_info = &pi->phy;
+	status = ice_aq_get_link_info(pi, true, NULL, NULL);
+	if (status)
+		goto out;
+
+	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+					     pcaps, NULL);
+		if (status)
+			goto out;
+
+		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
+			   sizeof(phy_info->link_info.module_type),
+			   ICE_NONDMA_TO_NONDMA);
+	}
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
+
+/**
+ * ice_set_fc
+ * @pi: port information structure
+ * @aq_failures: pointer to status code, specific to ice_set_fc routine
+ * @ena_auto_link_update: enable automatic link update
+ *
+ * Set the requested flow control mode.
+ */
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
+{
+	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_aqc_get_phy_caps_data *pcaps;
+	enum ice_status status;
+	u8 pause_mask = 0x0;
+	struct ice_hw *hw;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	hw = pi->hw;
+	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+
+	switch (pi->fc.req_mode) {
+	case ICE_FC_FULL:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_RX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE;
+		break;
+	case ICE_FC_TX_PAUSE:
+		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
+		break;
+	default:
+		break;
+	}
+
+	pcaps = (struct ice_aqc_get_phy_caps_data *)
+		ice_malloc(hw, sizeof(*pcaps));
+	if (!pcaps)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Get the current phy config */
+	status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps,
+				     NULL);
+	if (status) {
+		*aq_failures = ICE_SET_FC_AQ_FAIL_GET;
+		goto out;
+	}
+
+	/* clear the old pause settings */
+	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
+				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+	/* set the new capabilities */
+	cfg.caps |= pause_mask;
+	/* If the capabilities have changed, then set the new config */
+	if (cfg.caps != pcaps->caps) {
+		int retry_count, retry_max = 10;
+
+		/* Auto restart link so settings take effect */
+		if (ena_auto_link_update)
+			cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+		/* Copy over all the old settings */
+		cfg.phy_type_high = pcaps->phy_type_high;
+		cfg.phy_type_low = pcaps->phy_type_low;
+		cfg.low_power_ctrl = pcaps->low_power_ctrl;
+		cfg.eee_cap = pcaps->eee_cap;
+		cfg.eeer_value = pcaps->eeer_value;
+		cfg.link_fec_opt = pcaps->link_fec_options;
+
+		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		if (status) {
+			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
+			goto out;
+		}
+
+		/* Update the link info
+		 * It sometimes takes a really long time for the link to
+		 * come back from the atomic reset. Thus, we wait a
+		 * little bit.
+		 */
+		for (retry_count = 0; retry_count < retry_max; retry_count++) {
+			status = ice_update_link_info(pi);
+
+			if (status == ICE_SUCCESS)
+				break;
+
+			ice_msec_delay(100, true);
+		}
+
+		if (status)
+			*aq_failures = ICE_SET_FC_AQ_FAIL_UPDATE;
+	}
+
+out:
+	ice_free(hw, pcaps);
+	return status;
+}
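+
+/* Usage sketch (illustrative only, not part of the driver): the caller
+ * sets the requested mode in pi->fc.req_mode first, then inspects the
+ * per-step failure code on error:
+ *
+ *	u8 aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
+ *
+ *	pi->fc.req_mode = ICE_FC_FULL;
+ *	if (ice_set_fc(pi, &aq_failures, true))
+ *		// aq_failures reports whether the GET, SET or link-info
+ *		// UPDATE step failed
+ */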
+
+/**
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
+ *
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
+ */
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg)
+{
+	if (!caps || !cfg)
+		return;
+
+	cfg->phy_type_low = caps->phy_type_low;
+	cfg->phy_type_high = caps->phy_type_high;
+	cfg->caps = caps->caps;
+	cfg->low_power_ctrl = caps->low_power_ctrl;
+	cfg->eee_cap = caps->eee_cap;
+	cfg->eeer_value = caps->eeer_value;
+	cfg->link_fec_opt = caps->link_fec_options;
+}
+
+/**
+ * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
+ * @cfg: PHY configuration data to set FEC mode
+ * @fec: FEC mode to configure
+ *
+ * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
+ * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
+ * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
+ */
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
+{
+	switch (fec) {
+	case ICE_FEC_BASER:
+		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+				     ICE_AQC_PHY_FEC_25G_KR_REQ;
+		break;
+	case ICE_FEC_RS:
+		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		 * bits and OR request bits.
+		 */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
+		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
+		break;
+	case ICE_FEC_NONE:
+		/* Clear auto FEC and all FEC option bits. */
+		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
+		break;
+	case ICE_FEC_AUTO:
+		/* AND auto FEC bit, and all caps bits. */
+		cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
+		break;
+	}
+}
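+
+/* Usage sketch (illustrative only, not part of the driver): per the note
+ * above, the ability data is copied into the config before the FEC mode
+ * is applied and the config is sent:
+ *
+ *	ice_copy_phy_caps_to_cfg(pcaps, &cfg);
+ *	ice_cfg_phy_fec(&cfg, ICE_FEC_RS);
+ *	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+ */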
+
+/**
+ * ice_get_link_status - get status of the HW network link
+ * @pi: port information structure
+ * @link_up: pointer to bool (true/false = linkup/linkdown)
+ *
+ * Variable link_up is true if the link is up, false if it is down.
+ * The value of link_up is invalid if the return status is non-zero. As a
+ * result of this call, link status reporting becomes enabled.
+ */
+enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+{
+	struct ice_phy_info *phy_info;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (!pi || !link_up)
+		return ICE_ERR_PARAM;
+
+	phy_info = &pi->phy;
+
+	if (phy_info->get_link_info) {
+		status = ice_update_link_info(pi);
+
+		if (status)
+			ice_debug(pi->hw, ICE_DBG_LINK,
+				  "get link status error, status = %d\n",
+				  status);
+	}
+
+	*link_up = phy_info->link_info.link_info & ICE_AQ_LINK_UP;
+
+	return status;
+}
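+
+/* Usage sketch (illustrative only, not part of the driver):
+ *
+ *	bool link_up;
+ *
+ *	if (!ice_get_link_status(pi, &link_up) && link_up)
+ *		// link is up; pi->phy.link_info holds the details
+ */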
+
+/**
+ * ice_aq_set_link_restart_an
+ * @pi: pointer to the port information structure
+ * @ena_link: if true: enable link, if false: disable link
+ * @cd: pointer to command details structure or NULL
+ *
+ * Sets up the link and restarts the Auto-Negotiation over the link.
+ */
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_restart_an *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.restart_an;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_restart_an);
+
+	cmd->cmd_flags = ICE_AQC_RESTART_AN_LINK_RESTART;
+	cmd->lport_num = pi->lport;
+	if (ena_link)
+		cmd->cmd_flags |= ICE_AQC_RESTART_AN_LINK_ENABLE;
+	else
+		cmd->cmd_flags &= ~ICE_AQC_RESTART_AN_LINK_ENABLE;
+
+	return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_event_mask
+ * @hw: pointer to the hw struct
+ * @port_num: port number of the physical function
+ * @mask: event mask to be set
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set event mask (0x0613)
+ */
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_event_mask *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_event_mask;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_event_mask);
+
+	cmd->lport_num = port_num;
+
+	cmd->event_mask = CPU_TO_LE16(mask);
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the hw struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_mac_lb *cmd;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_mac_lb;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+	if (ena_lpbk)
+		cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+
+/**
+ * ice_aq_set_port_id_led
+ * @pi: pointer to the port information
+ * @is_orig_mode: is this LED set to original mode (by the net-list)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Set LED value for the given port (0x06e9)
+ */
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_set_port_id_led *cmd;
+	struct ice_hw *hw = pi->hw;
+	struct ice_aq_desc desc;
+
+	cmd = &desc.params.set_port_id_led;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_id_led);
+
+
+	if (is_orig_mode)
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_ORIG;
+	else
+		cmd->ident_mode = ICE_AQC_PORT_IDENT_LED_BLINK;
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * __ice_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: VSI FW index
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ * @glob_lut_idx: global LUT index
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
+ */
+static enum ice_status
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+			 u16 lut_size, u8 glob_lut_idx, bool set)
+{
+	struct ice_aqc_get_set_rss_lut *cmd_resp;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 flags = 0;
+
+	cmd_resp = &desc.params.get_set_rss_lut;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_lut);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_lut);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_LUT_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_LUT_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_LUT_VSI_VALID);
+
+	switch (lut_type) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF:
+	case ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL:
+		flags |= ((lut_type << ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_S) &
+			  ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_M);
+		break;
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+	if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_GLOBAL) {
+		flags |= ((glob_lut_idx << ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_S) &
+			  ICE_AQC_GSET_RSS_LUT_GLOBAL_IDX_M);
+
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+		if (!set)
+			goto ice_aq_get_set_rss_lut_send;
+	} else {
+		goto ice_aq_get_set_rss_lut_send;
+	}
+
+	/* LUT size is only valid for Global and PF table types */
+	switch (lut_size) {
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+		break;
+	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
+		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
+			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+			break;
+		}
+		/* fall-through */
+	default:
+		status = ICE_ERR_PARAM;
+		goto ice_aq_get_set_rss_lut_exit;
+	}
+
+ice_aq_get_set_rss_lut_send:
+	cmd_resp->flags = CPU_TO_LE16(flags);
+	status = ice_aq_send_cmd(hw, &desc, lut, lut_size, NULL);
+
+ice_aq_get_set_rss_lut_exit:
+	return status;
+}
+
+/**
+ * ice_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, false);
+}
+
+/**
+ * ice_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @lut_type: LUT table type
+ * @lut: pointer to the LUT buffer provided by the caller
+ * @lut_size: size of the LUT buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
+		   u8 *lut, u16 lut_size)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					lut_type, lut, lut_size, 0, true);
+}
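+
+/* Usage sketch (illustrative only, not part of the driver; assumes
+ * ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512 is the raw entry count, matching
+ * the lut_size comparison above):
+ *
+ *	u8 lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512];
+ *
+ *	status = ice_aq_get_rss_lut(hw, vsi_handle,
+ *				    ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+ *				    lut, sizeof(lut));
+ *	// modify lut[], then write it back
+ *	status = ice_aq_set_rss_lut(hw, vsi_handle,
+ *				    ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+ *				    lut, sizeof(lut));
+ */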
+
+/**
+ * __ice_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: VSI FW index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get (0x0B04) or set (0x0B02) the RSS key per VSI
+ */
+static enum ice_status
+__ice_aq_get_set_rss_key(struct ice_hw *hw, u16 vsi_id,
+			 struct ice_aqc_get_set_rss_keys *key, bool set)
+{
+	struct ice_aqc_get_set_rss_key *cmd_resp;
+	u16 key_size = sizeof(*key);
+	struct ice_aq_desc desc;
+
+	cmd_resp = &desc.params.get_set_rss_key;
+
+	if (set) {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_rss_key);
+		desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+	} else {
+		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_rss_key);
+	}
+
+	cmd_resp->vsi_id = CPU_TO_LE16(((vsi_id <<
+					 ICE_AQC_GSET_RSS_KEY_VSI_ID_S) &
+					ICE_AQC_GSET_RSS_KEY_VSI_ID_M) |
+				       ICE_AQC_GSET_RSS_KEY_VSI_VALID);
+
+	return ice_aq_send_cmd(hw, &desc, key, key_size, NULL);
+}
+
+/**
+ * ice_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *key)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !key)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					key, false);
+}
+
+/**
+ * ice_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @keys: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys)
+{
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !keys)
+		return ICE_ERR_PARAM;
+
+	return __ice_aq_get_set_rss_key(hw, ice_get_hw_vsi_num(hw, vsi_handle),
+					keys, true);
+}
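+
+/* Usage sketch (illustrative only, not part of the driver; the key layout
+ * follows struct ice_aqc_get_set_rss_keys in ice_adminq_cmd.h):
+ *
+ *	struct ice_aqc_get_set_rss_keys keys;
+ *
+ *	ice_memset(&keys, 0, sizeof(keys), ICE_NONDMA_MEM);
+ *	// fill the key bytes before the call
+ *	status = ice_aq_set_rss_key(hw, vsi_handle, &keys);
+ */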
+
+/**
+ * ice_aq_add_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: Number of added queue groups
+ * @qg_list: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add Tx LAN queue (0x0C30)
+ *
+ * NOTE:
+ * Prior to calling add Tx LAN queue, initialize the following as part of
+ * the Tx queue context: the Completion queue ID (if the queue uses a
+ * Completion queue), the Quanta profile, the Cache profile and the Packet
+ * shaper profile.
+ *
+ * After the add Tx LAN queue AQ command is completed, interrupts should be
+ * associated with specific queues. Association of a Tx queue to a Doorbell
+ * queue is not part of the Add LAN Tx queue flow.
+ */
+static enum ice_status
+ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_add_tx_qgrp *qg_list, u16 buf_size,
+		   struct ice_sq_cd *cd)
+{
+	u16 i, sum_header_size, sum_q_size = 0;
+	struct ice_aqc_add_tx_qgrp *list;
+	struct ice_aqc_add_txqs *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+
+	cmd = &desc.params.add_txqs;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_txqs);
+
+	if (!qg_list)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	sum_header_size = num_qgrps *
+		(sizeof(*qg_list) - sizeof(*qg_list->txqs));
+
+	list = qg_list;
+	for (i = 0; i < num_qgrps; i++) {
+		struct ice_aqc_add_txqs_perq *q = list->txqs;
+
+		sum_q_size += list->num_txqs * sizeof(*q);
+		list = (struct ice_aqc_add_tx_qgrp *)(q + list->num_txqs);
+	}
+
+	if (buf_size != (sum_header_size + sum_q_size))
+		return ICE_ERR_PARAM;
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	cmd->num_qgrps = num_qgrps;
+
+	return ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+}
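+
+/* Buffer sizing sketch (illustrative only; assumes txqs[] is declared with
+ * one element, as in ice_adminq_cmd.h): for one group carrying one queue,
+ * the buf_size check above expects
+ *
+ *	buf_size = (sizeof(struct ice_aqc_add_tx_qgrp)
+ *		    - sizeof(struct ice_aqc_add_txqs_perq))   // header
+ *		   + 1 * sizeof(struct ice_aqc_add_txqs_perq) // queue entry
+ *
+ * which collapses to sizeof(struct ice_aqc_add_tx_qgrp).
+ */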
+
+/**
+ * ice_aq_dis_lan_txq
+ * @hw: pointer to the hardware structure
+ * @num_qgrps: number of groups in the list
+ * @qg_list: the list of groups to disable
+ * @buf_size: the total size of the qg_list buffer in bytes
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * Disable LAN Tx queue (0x0C31)
+ */
+static enum ice_status
+ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
+		   struct ice_aqc_dis_txq_item *qg_list, u16 buf_size,
+		   enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_dis_txqs *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 i, sz = 0;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	cmd = &desc.params.dis_txqs;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
+
+	/* qg_list can be NULL only in VM/VF reset flow */
+	if (!qg_list && !rst_src)
+		return ICE_ERR_PARAM;
+
+	if (num_qgrps > ICE_LAN_TXQ_MAX_QGRPS)
+		return ICE_ERR_PARAM;
+
+	cmd->num_entries = num_qgrps;
+
+	cmd->vmvf_and_timeout = CPU_TO_LE16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
+
+	switch (rst_src) {
+	case ICE_VM_RESET:
+		cmd->cmd_type = ICE_AQC_Q_DIS_CMD_VM_RESET;
+		cmd->vmvf_and_timeout |=
+			CPU_TO_LE16(vmvf_num & ICE_AQC_Q_DIS_VMVF_NUM_M);
+		break;
+	case ICE_NO_RESET:
+	default:
+		break;
+	}
+
+	/* flush pipe on time out */
+	cmd->cmd_type |= ICE_AQC_Q_DIS_CMD_FLUSH_PIPE;
+	/* If no queue group info, we are in a reset flow. Issue the AQ */
+	if (!qg_list)
+		goto do_aq;
+
+	/* set RD bit to indicate that command buffer is provided by the driver
+	 * and it needs to be read by the firmware
+	 */
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	for (i = 0; i < num_qgrps; ++i) {
+		/* Calculate the size taken up by the queue IDs in this group */
+		sz += qg_list[i].num_qs * sizeof(qg_list[i].q_id);
+
+		/* Add the size of the group header */
+		sz += sizeof(qg_list[i]) - sizeof(qg_list[i].q_id);
+
+		/* If the num of queues is even, add 2 bytes of padding */
+		if ((qg_list[i].num_qs % 2) == 0)
+			sz += 2;
+	}
+
+	if (buf_size != sz)
+		return ICE_ERR_PARAM;
+
+do_aq:
+	status = ice_aq_send_cmd(hw, &desc, qg_list, buf_size, cd);
+	if (status) {
+		if (!qg_list)
+			ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
+				  vmvf_num, hw->adminq.sq_last_status);
+		else
+			ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
+				  LE16_TO_CPU(qg_list[0].q_id[0]),
+				  hw->adminq.sq_last_status);
+	}
+	return status;
+}
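+
+/* Sizing sketch (illustrative only): per the loop above, a group disabling
+ * two queues contributes
+ *
+ *	2 * sizeof(q_id)                 queue IDs
+ *	+ sizeof(group) - sizeof(q_id)   group header
+ *	+ 2                              padding, since num_qs is even
+ *
+ * bytes to buf_size; the padding keeps each group entry 32-bit aligned
+ * (an assumption inferred from the even-count rule).
+ */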
+
+
+/* End of FW Admin Queue command wrappers */
+
+/**
+ * ice_write_byte - write a byte to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_byte(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u8 src_byte, dest_byte, mask;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = (u8)(BIT(ce_info->width) - 1);
+
+	src_byte = *from;
+	src_byte &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_byte <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_byte, dest, sizeof(dest_byte), ICE_DMA_TO_NONDMA);
+
+	dest_byte &= ~mask;	/* get the bits not changing */
+	dest_byte |= src_byte;	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_byte, sizeof(dest_byte), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_word - write a word to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_word(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u16 src_word, mask;
+	__le16 dest_word;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+	mask = BIT(ce_info->width) - 1;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_word = *(u16 *)from;
+	src_word &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_word <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_word, dest, sizeof(dest_word), ICE_DMA_TO_NONDMA);
+
+	dest_word &= ~(CPU_TO_LE16(mask));	/* get the bits not changing */
+	dest_word |= CPU_TO_LE16(src_word);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_word, sizeof(dest_word), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_dword - write a dword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_dword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u32 src_dword, mask;
+	__le32 dest_dword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 32 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count is
+	 * masked to 5 bits, so the shift will do nothing
+	 */
+	if (ce_info->width < 32)
+		mask = BIT(ce_info->width) - 1;
+	else
+		mask = (u32)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_dword = *(u32 *)from;
+	src_dword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_dword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_dword, dest, sizeof(dest_dword), ICE_DMA_TO_NONDMA);
+
+	dest_dword &= ~(CPU_TO_LE32(mask));	/* get the bits not changing */
+	dest_dword |= CPU_TO_LE32(src_dword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_dword, sizeof(dest_dword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_write_qword - write a qword to a packed context structure
+ * @src_ctx:  the context structure to read from
+ * @dest_ctx: the context to be written to
+ * @ce_info:  a description of the struct to be filled
+ */
+static void
+ice_write_qword(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	u64 src_qword, mask;
+	__le64 dest_qword;
+	u8 *from, *dest;
+	u16 shift_width;
+
+	/* copy from the next struct field */
+	from = src_ctx + ce_info->offset;
+
+	/* prepare the bits and mask */
+	shift_width = ce_info->lsb % 8;
+
+	/* if the field width is exactly 64 on an x86 machine, then the shift
+	 * operation will not work because the SHL instruction's shift count is
+	 * masked to 6 bits, so the shift will do nothing
+	 */
+	if (ce_info->width < 64)
+		mask = BIT_ULL(ce_info->width) - 1;
+	else
+		mask = (u64)~0;
+
+	/* don't swizzle the bits until after the mask because the mask bits
+	 * will be in a different bit position on big endian machines
+	 */
+	src_qword = *(u64 *)from;
+	src_qword &= mask;
+
+	/* shift to correct alignment */
+	mask <<= shift_width;
+	src_qword <<= shift_width;
+
+	/* get the current bits from the target bit string */
+	dest = dest_ctx + (ce_info->lsb / 8);
+
+	ice_memcpy(&dest_qword, dest, sizeof(dest_qword), ICE_DMA_TO_NONDMA);
+
+	dest_qword &= ~(CPU_TO_LE64(mask));	/* get the bits not changing */
+	dest_qword |= CPU_TO_LE64(src_qword);	/* add in the new bits */
+
+	/* put it all back */
+	ice_memcpy(dest, &dest_qword, sizeof(dest_qword), ICE_NONDMA_TO_DMA);
+}
+
+/**
+ * ice_set_ctx - set context bits in packed structure
+ * @src_ctx:  pointer to a generic non-packed context structure
+ * @dest_ctx: pointer to memory for the packed structure
+ * @ce_info:  a description of the structure to be transformed
+ */
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
+{
+	int f;
+
+	for (f = 0; ce_info[f].width; f++) {
+		/* We have to deal with each element of the FW response
+		 * using the correct size so that we are correct regardless
+		 * of the endianness of the machine.
+		 */
+		switch (ce_info[f].size_of) {
+		case sizeof(u8):
+			ice_write_byte(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u16):
+			ice_write_word(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u32):
+			ice_write_dword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		case sizeof(u64):
+			ice_write_qword(src_ctx, dest_ctx, &ce_info[f]);
+			break;
+		default:
+			return ICE_ERR_INVAL_SIZE;
+		}
+	}
+
+	return ICE_SUCCESS;
+}
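+
+/* Usage sketch (illustrative only, not part of the driver): callers
+ * describe each field of the unpacked context with a ce_info table
+ * (size_of, offset, width, lsb) terminated by a zero-width entry, then
+ * hand both buffers to ice_set_ctx(), e.g. for the Rx queue context
+ * (assuming a table such as ice_rlan_ctx_info built elsewhere in the
+ * base code):
+ *
+ *	ice_set_ctx((u8 *)&rlan_ctx, ctx_buf, ice_rlan_ctx_info);
+ */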
+
+
+
+
+
+/**
+ * ice_ena_vsi_txq
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @num_qgrps: Number of added queue groups
+ * @buf: list of queue groups to be added
+ * @buf_size: size of buffer for indirect command
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function adds one LAN Tx queue and its scheduler tree leaf node.
+ */
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd)
+{
+	struct ice_aqc_txsched_elem_data node = { 0 };
+	struct ice_sched_node *parent;
+	enum ice_status status;
+	struct ice_hw *hw;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (num_qgrps > 1 || buf->num_txqs > 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	hw = pi->hw;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	/* find a parent node */
+	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
+					    ICE_SCHED_NODE_OWNER_LAN);
+	if (!parent) {
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
+	buf->parent_teid = parent->info.node_teid;
+	node.parent_teid = parent->info.node_teid;
+	/* Mark that the values in the "generic" section as valid. The default
+	 * value in the "generic" section is zero. This means that :
+	 * - Scheduling mode is Bytes Per Second (BPS), indicated by Bit 0.
+	 * - 0 priority among siblings, indicated by Bit 1-3.
+	 * - WFQ, indicated by Bit 4.
+	 * - 0 Adjustment value is used in PSM credit update flow, indicated by
+	 * Bit 5-6.
+	 * - Bit 7 is reserved.
+	 * Without setting the generic section as valid in valid_sections, the
+	 * Admin Q command will fail with error code ICE_AQ_RC_EINVAL.
+	 */
+	buf->txqs[0].info.valid_sections = ICE_AQC_ELEM_VALID_GENERIC;
+
+	/* add the LAN Tx queue */
+	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
+	if (status != ICE_SUCCESS) {
+		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
+			  LE16_TO_CPU(buf->txqs[0].txq_id),
+			  hw->adminq.sq_last_status);
+		goto ena_txq_exit;
+	}
+
+	node.node_teid = buf->txqs[0].q_teid;
+	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+
+	/* add a leaf node into the scheduler tree queue layer */
+	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+
+ena_txq_exit:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_dis_vsi_txq
+ * @pi: port information structure
+ * @num_queues: number of queues
+ * @q_ids: pointer to the q_id array
+ * @q_teids: pointer to queue node teids
+ * @rst_src: if called due to reset, specifies the rst source
+ * @vmvf_num: the relative vm or vf number that is undergoing the reset
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function removes queues and their corresponding nodes in SW DB
+ */
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_aqc_dis_txq_item qg_list;
+	u16 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	/* if the queues are already disabled but the disable queue command
+	 * still has to be sent to complete the VF reset, then call
+	 * ice_aq_dis_lan_txq without any queue information
+	 */
+
+	if (!num_queues && rst_src)
+		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
+					  NULL);
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < num_queues; i++) {
+		struct ice_sched_node *node;
+
+		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
+		if (!node)
+			continue;
+		qg_list.parent_teid = node->info.parent_teid;
+		qg_list.num_qs = 1;
+		qg_list.q_id[0] = CPU_TO_LE16(q_ids[i]);
+		status = ice_aq_dis_lan_txq(pi->hw, 1, &qg_list,
+					    sizeof(qg_list), rst_src, vmvf_num,
+					    cd);
+
+		if (status != ICE_SUCCESS)
+			break;
+		ice_free_sched_node(pi, node);
+	}
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
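+
+/* Usage sketch (illustrative only, not part of the driver): the enable and
+ * disable paths pair up via the queue TEID returned by the add flow:
+ *
+ *	status = ice_ena_vsi_txq(pi, vsi_handle, tc, 1, buf,
+ *				 sizeof(*buf), NULL);
+ *	q_teid = LE32_TO_CPU(buf->txqs[0].q_teid); // keep for disable
+ *	...
+ *	status = ice_dis_vsi_txq(pi, 1, &q_id, &q_teid,
+ *				 ICE_NO_RESET, 0, NULL);
+ */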
+
+/**
+ * ice_cfg_vsi_qs - configure the new/existing VSI queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @maxqs: max queues array per TC
+ * @owner: lan or rdma
+ *
+ * This function adds/updates the VSI queues per TC.
+ */
+static enum ice_status
+ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+	       u16 *maxqs, u8 owner)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u8 i;
+
+	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
+		return ICE_ERR_CFG;
+
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+
+	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		/* configuration is possible only if TC node is present */
+		if (!ice_sched_get_tc_node(pi, i))
+			continue;
+
+		status = ice_sched_cfg_vsi(pi, vsi_handle, i, maxqs[i], owner,
+					   ice_is_tc_ena(tc_bitmap, i));
+		if (status)
+			break;
+	}
+
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
+/**
+ * ice_cfg_vsi_lan - configure VSI lan queues
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc_bitmap: TC bitmap
+ * @max_lanqs: max lan queues array per TC
+ *
+ * This function adds/updates the VSI lan queues per TC.
+ */
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs)
+{
+	return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs,
+			      ICE_SCHED_NODE_OWNER_LAN);
+}
+
+
+
+/**
+ * ice_replay_pre_init - replay pre initialization
+ * @hw: pointer to the hw struct
+ *
+ * Initializes required config data for VSI, FD, ACL, and RSS before replay.
+ */
+static enum ice_status ice_replay_pre_init(struct ice_hw *hw)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	u8 i;
+
+	/* Delete old entries from replay filter list head if there is any */
+	ice_rm_all_sw_replay_rule_info(hw);
+	/* At the start of replay, move entries into the replay_rules list;
+	 * this allows rule entries to be added back to the filt_rules list,
+	 * which is the operational list.
+	 */
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
+		LIST_REPLACE_INIT(&sw->recp_list[i].filt_rules,
+				  &sw->recp_list[i].filt_replay_rules);
+	ice_sched_replay_agg_vsi_preinit(hw);
+
+	return ice_sched_replay_tc_node_bw(hw);
+}
+
+/**
+ * ice_replay_vsi - replay vsi configuration
+ * @hw: pointer to the hw struct
+ * @vsi_handle: driver vsi handle
+ *
+ * Restore all VSI configuration after reset. It is required to call this
+ * function with main VSI first.
+ */
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Replay pre-initialization if there is any */
+	if (vsi_handle == ICE_MAIN_VSI_HANDLE) {
+		status = ice_replay_pre_init(hw);
+		if (status)
+			return status;
+	}
+
+	/* Replay per VSI all filters */
+	status = ice_replay_vsi_all_fltr(hw, vsi_handle);
+	if (!status)
+		status = ice_replay_vsi_agg(hw, vsi_handle);
+	return status;
+}
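+
+/* Usage sketch (illustrative only, not part of the driver): after a reset,
+ * replay the main VSI first (which runs the pre-init above), then every
+ * other valid VSI, and finish with the post-replay cleanup:
+ *
+ *	ice_replay_vsi(hw, ICE_MAIN_VSI_HANDLE);
+ *	// ... ice_replay_vsi(hw, vsi_handle) for each remaining VSI ...
+ *	ice_replay_post(hw);
+ */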
+
+/**
+ * ice_replay_post - post replay configuration cleanup
+ * @hw: pointer to the hw struct
+ *
+ * Post replay cleanup.
+ */
+void ice_replay_post(struct ice_hw *hw)
+{
+	/* Delete old entries from replay filter list head */
+	ice_rm_all_sw_replay_rule_info(hw);
+	ice_sched_replay_agg(hw);
+}
+
+/**
+ * ice_stat_update40 - read 40 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @hireg: high 32 bit HW register to read from
+ * @loreg: low 32 bit HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+{
+	u64 new_data;
+
+	new_data = rd32(hw, loreg);
+	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+
+	/* device stats are not reset at PFR, so they will likely not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values in order to
+	 * report stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
+	*cur_stat &= 0xFFFFFFFFFFULL;
+}
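+
+/* Roll-over sketch (illustrative only): with a 40-bit counter, a previous
+ * snapshot of 0xFFFFFFFFF0 followed by a raw read of 0x10 yields
+ *
+ *	(0x10 + BIT_ULL(40)) - 0xFFFFFFFFF0 = 0x20
+ *
+ * i.e. 32 units counted since the last read, despite the wrap.
+ */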
+
+/**
+ * ice_stat_update32 - read 32 bit stat from the chip and update stat values
+ * @hw: ptr to the hardware info
+ * @reg: HW register to read from
+ * @prev_stat_loaded: bool to specify if previous stats are loaded
+ * @prev_stat: ptr to previous loaded stat value
+ * @cur_stat: ptr to current stat value
+ */
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
+{
+	u32 new_data;
+
+	new_data = rd32(hw, reg);
+
+	/* device stats are not reset at PFR, so they will likely not be
+	 * zeroed when the driver starts. Save the first values read and use
+	 * them as offsets to be subtracted from the raw values in order to
+	 * report stats that count from zero.
+	 */
+	if (!prev_stat_loaded)
+		*prev_stat = new_data;
+	if (new_data >= *prev_stat)
+		*cur_stat = new_data - *prev_stat;
+	else
+		/* to manage the potential roll-over */
+		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+}
+
+
+/**
+ * ice_sched_query_elem - query element information from hw
+ * @hw: pointer to the hw struct
+ * @node_teid: node teid to be queried
+ * @buf: buffer to element information
+ *
+ * This function queries HW element information
+ */
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf)
+{
+	u16 buf_size, num_elem_ret = 0;
+	enum ice_status status;
+
+	buf_size = sizeof(*buf);
+	ice_memset(buf, 0, buf_size, ICE_NONDMA_MEM);
+	buf->generic[0].node_teid = CPU_TO_LE32(node_teid);
+	status = ice_aq_query_sched_elems(hw, 1, buf, buf_size, &num_elem_ret,
+					  NULL);
+	if (status != ICE_SUCCESS || num_elem_ret != 1)
+		ice_debug(hw, ICE_DBG_SCHED, "query element failed\n");
+	return status;
+}
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
new file mode 100644
index 0000000..082ae66
--- /dev/null
+++ b/drivers/net/ice/base/ice_common.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_COMMON_H_
+#define _ICE_COMMON_H_
+
+#include "ice_type.h"
+
+#include "ice_switch.h"
+
+/* prototypes for functions used for SW locks */
+void ice_free_list(struct LIST_HEAD_TYPE *list);
+void ice_init_lock(struct ice_lock *lock);
+void ice_acquire_lock(struct ice_lock *lock);
+void ice_release_lock(struct ice_lock *lock);
+void ice_destroy_lock(struct ice_lock *lock);
+
+void *ice_alloc_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m, u64 size);
+void ice_free_dma_mem(struct ice_hw *hw, struct ice_dma_mem *m);
+
+bool ice_sq_done(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
+void
+ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
+enum ice_status ice_init_hw(struct ice_hw *hw);
+void ice_deinit_hw(struct ice_hw *hw);
+enum ice_status ice_check_reset(struct ice_hw *hw);
+enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
+
+enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
+void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+enum ice_status
+ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		  struct ice_rq_event_info *e, u16 *pending);
+enum ice_status
+ice_get_link_status(struct ice_port_info *pi, bool *link_up);
+enum ice_status
+ice_update_link_info(struct ice_port_info *pi);
+enum ice_status
+ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
+		enum ice_aq_res_access_type access, u32 timeout);
+void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res);
+enum ice_status
+ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
+		      struct ice_aqc_alloc_free_res_elem *buf, u16 buf_size,
+		      enum ice_adminq_opc opc, struct ice_sq_cd *cd);
+enum ice_status ice_init_nvm(struct ice_hw *hw);
+enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data);
+enum ice_status
+ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data);
+enum ice_status
+ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+		struct ice_aq_desc *desc, void *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+void ice_clear_pxe_mode(struct ice_hw *hw);
+
+enum ice_status ice_get_caps(struct ice_hw *hw);
+
+
+
+#if defined(FPGA_SUPPORT) || defined(CVL_A0_SUPPORT)
+void ice_dev_onetime_setup(struct ice_hw *hw);
+#endif /* FPGA_SUPPORT || CVL_A0_SUPPORT */
+
+
+enum ice_status
+ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
+		  u32 rxq_index);
+#if !defined(NO_UNUSED_CTX_CODE) || defined(AE_DRIVER)
+enum ice_status ice_clear_rxq_ctx(struct ice_hw *hw, u32 rxq_index);
+enum ice_status
+ice_clear_tx_cmpltnq_ctx(struct ice_hw *hw, u32 tx_cmpltnq_index);
+enum ice_status
+ice_write_tx_cmpltnq_ctx(struct ice_hw *hw,
+			 struct ice_tx_cmpltnq_ctx *tx_cmpltnq_ctx,
+			 u32 tx_cmpltnq_index);
+enum ice_status
+ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index);
+enum ice_status
+ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
+			  struct ice_tx_drbell_q_ctx *tx_drbell_q_ctx,
+			  u32 tx_drbell_q_index);
+#endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
+
+enum ice_status
+ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
+		   u16 lut_size);
+enum ice_status
+ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+enum ice_status
+ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
+		   struct ice_aqc_get_set_rss_keys *keys);
+
+bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
+enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
+void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
+extern const struct ice_ctx_ele ice_tlan_ctx_info[];
+enum ice_status
+ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info);
+enum ice_status
+ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc,
+		void *buf, u16 buf_size, struct ice_sq_cd *cd);
+enum ice_status ice_aq_get_fw_ver(struct ice_hw *hw, struct ice_sq_cd *cd);
+
+enum ice_status
+ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
+		    struct ice_aqc_get_phy_caps_data *caps,
+		    struct ice_sq_cd *cd);
+void
+ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
+		    u16 link_speeds_bitmap);
+enum ice_status
+ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+			struct ice_sq_cd *cd);
+
+enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
+enum ice_status
+ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_status
+ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
+	   bool ena_auto_link_update);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			 struct ice_aqc_set_phy_cfg_data *cfg);
+enum ice_status
+ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
+			   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
+		     struct ice_link_status *link, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
+		      struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+
+enum ice_status
+ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
+		       struct ice_sq_cd *cd);
+
+
+
+
+enum ice_status
+ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
+		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cmd_details);
+enum ice_status
+ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
+		u16 *max_lanqs);
+enum ice_status
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
+		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+		struct ice_sq_cd *cd);
+enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
+void ice_replay_post(struct ice_hw *hw);
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
+void ice_sched_replay_agg(struct ice_hw *hw);
+enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
+enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
+			 enum ice_rl_type rl_type, u8 bw_alloc);
+enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
+void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
+void
+ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
+		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
+enum ice_status
+ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+		     struct ice_aqc_get_elem *buf);
+#endif /* _ICE_COMMON_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 11/31] net/ice/base: add various headers
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (9 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 10/31] net/ice/base: add common functions Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
                     ` (20 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add various headers that define status codes and
basic defines for use in the code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_alloc.h     | 22 ++++++++++++++++++
 drivers/net/ice/base/ice_flex_type.h | 19 +++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  8 +++++++
 drivers/net/ice/base/ice_status.h    | 45 ++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_status.h

diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h
new file mode 100644
index 0000000..7883104
--- /dev/null
+++ b/drivers/net/ice/base/ice_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_ALLOC_H_
+#define _ICE_ALLOC_H_
+
+/* Memory types */
+enum ice_memset_type {
+	ICE_NONDMA_MEM = 0,
+	ICE_DMA_MEM
+};
+
+/* Memcpy types */
+enum ice_memcpy_type {
+	ICE_NONDMA_TO_NONDMA = 0,
+	ICE_NONDMA_TO_DMA,
+	ICE_DMA_TO_DMA,
+	ICE_DMA_TO_NONDMA
+};
+
+#endif /* _ICE_ALLOC_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
new file mode 100644
index 0000000..84a38cb
--- /dev/null
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLEX_TYPE_H_
+#define _ICE_FLEX_TYPE_H_
+
+/* Extraction Sequence (Field Vector) Table */
+struct ice_fv_word {
+	u8 prot_id;
+	u8 off;		/* Offset within the protocol header */
+};
+
+#define ICE_MAX_FV_WORDS 48
+struct ice_fv {
+	struct ice_fv_word ew[ICE_MAX_FV_WORDS];
+};
+
+#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
new file mode 100644
index 0000000..228a2c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_flow.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_FLOW_H_
+#define _ICE_FLOW_H_
+
+#endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h
new file mode 100644
index 0000000..898bfa6
--- /dev/null
+++ b/drivers/net/ice/base/ice_status.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_STATUS_H_
+#define _ICE_STATUS_H_
+
+/* Error Codes */
+enum ice_status {
+	ICE_SUCCESS				= 0,
+
+	/* Generic codes : Range -1..-49 */
+	ICE_ERR_PARAM				= -1,
+	ICE_ERR_NOT_IMPL			= -2,
+	ICE_ERR_NOT_READY			= -3,
+	ICE_ERR_BAD_PTR				= -5,
+	ICE_ERR_INVAL_SIZE			= -6,
+	ICE_ERR_DEVICE_NOT_SUPPORTED		= -8,
+	ICE_ERR_RESET_FAILED			= -9,
+	ICE_ERR_FW_API_VER			= -10,
+	ICE_ERR_NO_MEMORY			= -11,
+	ICE_ERR_CFG				= -12,
+	ICE_ERR_OUT_OF_RANGE			= -13,
+	ICE_ERR_ALREADY_EXISTS			= -14,
+	ICE_ERR_DOES_NOT_EXIST			= -15,
+	ICE_ERR_IN_USE				= -16,
+	ICE_ERR_MAX_LIMIT			= -17,
+	ICE_ERR_RESET_ONGOING			= -18,
+	ICE_ERR_HW_TABLE			= -19,
+
+	/* NVM specific error codes: Range -50..-59 */
+	ICE_ERR_NVM				= -50,
+	ICE_ERR_NVM_CHECKSUM			= -51,
+	ICE_ERR_BUF_TOO_SHORT			= -52,
+	ICE_ERR_NVM_BLANK_MODE			= -53,
+
+	/* ARQ/ASQ specific error codes. Range -100..-109 */
+	ICE_ERR_AQ_ERROR			= -100,
+	ICE_ERR_AQ_TIMEOUT			= -101,
+	ICE_ERR_AQ_FULL				= -102,
+	ICE_ERR_AQ_NO_WORK			= -103,
+	ICE_ERR_AQ_EMPTY			= -104,
+};
+
+#endif /* _ICE_STATUS_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 12/31] net/ice/base: add protocol structures and defines
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (10 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 11/31] net/ice/base: add various headers Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
                     ` (19 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures and defines that define what
protocols the NIC can handle.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 248 +++++++++++++++++++++++++++++++
 1 file changed, 248 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
new file mode 100644
index 0000000..7b92c71
--- /dev/null
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -0,0 +1,248 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_PROTOCOL_TYPE_H_
+#define _ICE_PROTOCOL_TYPE_H_
+#include "ice_flex_type.h"
+#define ICE_IPV6_ADDR_LENGTH 16
+
+/* Each recipe can match up to 5 different fields. Fields to match can be meta-
+ * data, values extracted from packet headers, or results from other recipes.
+ * One of the 5 fields is reserved for matching the switch ID. So, up to 4
+ * recipes can provide intermediate results to another one through chaining,
+ * e.g. recipes 0, 1, 2, and 3 can provide intermediate results to recipe 4.
+ */
+#define ICE_NUM_WORDS_RECIPE 4
+
+/* Max recipes that can be chained */
+#define ICE_MAX_CHAIN_RECIPE 5
+
+/* Of the allowed 5 words, 1 is reserved for the switch ID, so a recipe can
+ * have at most 4 words. Up to 5 such recipes can be chained together, so
+ * the maximum number of words that can be programmed for a lookup is 5 * 4.
+ */
+#define ICE_MAX_CHAIN_WORDS (ICE_NUM_WORDS_RECIPE * ICE_MAX_CHAIN_RECIPE)
+
+/* Field vector index corresponding to chaining */
+#define ICE_CHAIN_FV_INDEX_START 47
+
+enum ice_protocol_type {
+	ICE_MAC_OFOS = 0,
+	ICE_MAC_IL,
+	ICE_IPV4_OFOS,
+	ICE_IPV4_IL,
+	ICE_IPV6_IL,
+	ICE_IPV6_OFOS,
+	ICE_TCP_IL,
+	ICE_UDP_ILOS,
+	ICE_SCTP_IL,
+	ICE_VXLAN,
+	ICE_GENEVE,
+	ICE_VXLAN_GPE,
+	ICE_NVGRE,
+	ICE_PROTOCOL_LAST
+};
+
+enum ice_sw_tunnel_type {
+	ICE_NON_TUN,
+	ICE_SW_TUN_VXLAN_GPE,
+	ICE_SW_TUN_GENEVE,
+	ICE_SW_TUN_VXLAN,
+	ICE_SW_TUN_NVGRE,
+	ICE_SW_TUN_UDP, /* This means all "UDP" tunnel types: VXLAN-GPE, VXLAN
+			 * and GENEVE
+			 */
+	ICE_ALL_TUNNELS /* All tunnel types including NVGRE */
+};
+
+/* Decoders for ice_prot_id:
+ * - F: First
+ * - I: Inner
+ * - L: Last
+ * - O: Outer
+ * - S: Single
+ */
+enum ice_prot_id {
+	ICE_PROT_ID_INVAL	= 0,
+	ICE_PROT_MAC_OF_OR_S	= 1,
+	ICE_PROT_MAC_O2		= 2,
+	ICE_PROT_MAC_IL		= 4,
+	ICE_PROT_MAC_IN_MAC	= 7,
+	ICE_PROT_ETYPE_OL	= 9,
+	ICE_PROT_ETYPE_IL	= 10,
+	ICE_PROT_PAY		= 15,
+	ICE_PROT_EVLAN_O	= 16,
+	ICE_PROT_VLAN_O		= 17,
+	ICE_PROT_VLAN_IF	= 18,
+	ICE_PROT_MPLS_OL_MINUS_1 = 27,
+	ICE_PROT_MPLS_OL_OR_OS	= 28,
+	ICE_PROT_MPLS_IL	= 29,
+	ICE_PROT_IPV4_OF_OR_S	= 32,
+	ICE_PROT_IPV4_IL	= 33,
+	ICE_PROT_IPV6_OF_OR_S	= 40,
+	ICE_PROT_IPV6_IL	= 41,
+	ICE_PROT_IPV6_FRAG	= 47,
+	ICE_PROT_TCP_IL		= 49,
+	ICE_PROT_UDP_OF		= 52,
+	ICE_PROT_UDP_IL_OR_S	= 53,
+	ICE_PROT_GRE_OF		= 64,
+	ICE_PROT_NSH_F		= 84,
+	ICE_PROT_ESP_F		= 88,
+	ICE_PROT_ESP_2		= 89,
+	ICE_PROT_SCTP_IL	= 96,
+	ICE_PROT_ICMP_IL	= 98,
+	ICE_PROT_ICMPV6_IL	= 100,
+	ICE_PROT_VRRP_F		= 101,
+	ICE_PROT_OSPF		= 102,
+	ICE_PROT_ATAOE_OF	= 114,
+	ICE_PROT_CTRL_OF	= 116,
+	ICE_PROT_LLDP_OF	= 117,
+	ICE_PROT_ARP_OF		= 118,
+	ICE_PROT_EAPOL_OF	= 120,
+	ICE_PROT_META_ID	= 255, /* when offset == metadata */
+	ICE_PROT_INVALID	= 255  /* when offset == 0xFF */
+};
+
+
+#define ICE_MAC_OFOS_HW		1
+#define ICE_MAC_IL_HW		4
+#define ICE_IPV4_OFOS_HW	32
+#define ICE_IPV4_IL_HW		33
+#define ICE_IPV6_OFOS_HW	40
+#define ICE_IPV6_IL_HW		41
+#define ICE_TCP_IL_HW		49
+#define ICE_UDP_ILOS_HW		53
+#define ICE_SCTP_IL_HW		96
+
+/* ICE_UDP_OF is used to identify all 3 tunnel types:
+ * VXLAN, GENEVE and VXLAN_GPE. To differentiate further,
+ * the flags from the field vector must be used.
+ */
+#define ICE_UDP_OF_HW	52 /* UDP Tunnels */
+#define ICE_GRE_OF_HW	64 /* NVGRE */
+#define ICE_META_DATA_ID_HW 255 /* this is used for tunnel type */
+
+#define ICE_TUN_FLAG_MASK 0xFF
+#define ICE_TUN_FLAG_FV_IND 2
+
+#define ICE_PROTOCOL_MAX_ENTRIES 16
+
+/* Mapping of software defined protocol id to hardware defined protocol id */
+struct ice_protocol_entry {
+	enum ice_protocol_type type;
+	u8 protocol_id;
+};
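+
+/* Illustrative sketch only: a lookup table of such entries would pair each
+ * software protocol type with the matching ICE_*_HW protocol ID defined
+ * above. The table name and coverage here are hypothetical:
+ *
+ *	static const struct ice_protocol_entry ice_prot_id_tbl[] = {
+ *		{ ICE_MAC_OFOS,  ICE_MAC_OFOS_HW },
+ *		{ ICE_MAC_IL,    ICE_MAC_IL_HW },
+ *		{ ICE_IPV4_OFOS, ICE_IPV4_OFOS_HW },
+ *		{ ICE_TCP_IL,    ICE_TCP_IL_HW },
+ *		{ ICE_UDP_ILOS,  ICE_UDP_ILOS_HW },
+ *	};
+ */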
+
+
+struct ice_ether_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u16 ethtype_id;
+};
+
+struct ice_ether_vlan_hdr {
+	u8 dst_addr[ETH_ALEN];
+	u8 src_addr[ETH_ALEN];
+	u32 vlan_id;
+};
+
+struct ice_ipv4_hdr {
+	u8 version;
+	u8 tos;
+	u16 total_length;
+	u16 id;
+	u16 frag_off;
+	u8 time_to_live;
+	u8 protocol;
+	u16 check;
+	u32 src_addr;
+	u32 dst_addr;
+};
+
+struct ice_ipv6_hdr {
+	u8 version;
+	u8 tc;
+	u16 flow_label;
+	u16 payload_len;
+	u8 next_hdr;
+	u8 hop_limit;
+	u8 src_addr[ICE_IPV6_ADDR_LENGTH];
+	u8 dst_addr[ICE_IPV6_ADDR_LENGTH];
+};
+
+struct ice_sctp_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u32 verification_tag;
+	u32 check;
+};
+
+struct ice_l4_hdr {
+	u16 src_port;
+	u16 dst_port;
+	u16 len;
+	u16 check;
+};
+
+struct ice_udp_tnl_hdr {
+	u16 field;
+	u16 proto_type;
+	u16 vni;
+};
+
+struct ice_nvgre {
+	u16 tni;
+	u16 flow_id;
+};
+
+union ice_prot_hdr {
+		struct ice_ether_hdr eth_hdr;
+		struct ice_ipv4_hdr ipv4_hdr;
+		struct ice_ipv6_hdr ice_ipv6_ofos_hdr;
+		struct ice_l4_hdr l4_hdr;
+		struct ice_sctp_hdr sctp_hdr;
+		struct ice_udp_tnl_hdr tnl_hdr;
+		struct ice_nvgre nvgre_hdr;
+};
+
+/* This is a mapping table entry that maps every word within a given protocol
+ * structure to the real byte offset as per the specification of that
+ * protocol header.
+ * E.g. the dst address is 3 words in the Ethernet header; the corresponding
+ * bytes are 0, 2 and 4 in the actual packet header, and the src address is
+ * at 6, 8 and 10.
+ */
+struct ice_prot_ext_tbl_entry {
+	enum ice_protocol_type prot_type;
+	/* Byte offset into header of given protocol type */
+	u8 offs[sizeof(union ice_prot_hdr)];
+};
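+
+/* Worked example (hypothetical entry): per the comment above, the three
+ * words of the dst address sit at byte offsets 0, 2 and 4, the three words
+ * of the src address at 6, 8 and 10, and the EtherType word at 12, so an
+ * extraction table entry for ICE_MAC_OFOS could look like:
+ *
+ *	{ ICE_MAC_OFOS, { 0, 2, 4, 6, 8, 10, 12 } }
+ */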
+
+/* Extractions to be looked up for a given recipe */
+struct ice_prot_lkup_ext {
+	u16 prot_type;
+	u8 n_val_words;
+	/* create a buffer to hold max words per recipe */
+	u8 field_off[ICE_MAX_CHAIN_WORDS];
+
+	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
+
+	/* Indicate field offsets that have field vector indices assigned */
+	ice_declare_bitmap(done, ICE_MAX_CHAIN_WORDS);
+};
+
+struct ice_pref_recipe_group {
+	u8 n_val_pairs;		/* Number of valid pairs */
+	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+};
+
+struct ice_recp_grp_entry {
+	struct LIST_ENTRY_TYPE l_entry;
+
+#define ICE_INVAL_CHAIN_IND 0xFF
+	u16 rid;
+	u8 chain_idx;
+	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	struct ice_pref_recipe_group r_group;
+};
+#endif /* _ICE_PROTOCOL_TYPE_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 13/31] net/ice/base: add structures for RX/TX queues
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (11 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
                     ` (18 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Paul M Stillwell Jr

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

Add the structures that define how the RX/TX queues
are used.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 2291 ++++++++++++++++++++++++++++++++++
 1 file changed, 2291 insertions(+)
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
new file mode 100644
index 0000000..d27045f
--- /dev/null
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -0,0 +1,2291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2018
+ */
+
+#ifndef _ICE_LAN_TX_RX_H_
+#define _ICE_LAN_TX_RX_H_
+#include "ice_osdep.h"
+
+/* RX Descriptors */
+union ice_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union ice_32byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				__le16 mirroring_status;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow Director filter id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/PTYPE/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			__le32 reserved;
+			__le32 fd_id;
+		} qword3;
+	} wb; /* writeback */
+};
+
+struct ice_fltr_desc {
+	__le64 qidx_compq_space_stat;
+	__le64 dtype_cmd_vsi_fdid;
+};
+
+#define ICE_FXD_FLTR_QW0_QINDEX_S	0
+#define ICE_FXD_FLTR_QW0_QINDEX_M	(0x7FFULL << ICE_FXD_FLTR_QW0_QINDEX_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_S	11
+#define ICE_FXD_FLTR_QW0_COMP_Q_M	BIT_ULL(ICE_FXD_FLTR_QW0_COMP_Q_S)
+#define ICE_FXD_FLTR_QW0_COMP_Q_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_Q_QINDX	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_S	12
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_M	\
+				(0x3ULL << ICE_FXD_FLTR_QW0_COMP_REPORT_S)
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_NONE	0x0ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL	0x1ULL
+#define ICE_FXD_FLTR_QW0_COMP_REPORT_SW		0x2ULL
+
+#define ICE_FXD_FLTR_QW0_FD_SPACE_S	14
+#define ICE_FXD_FLTR_QW0_FD_SPACE_M	(0x3ULL << ICE_FXD_FLTR_QW0_FD_SPACE_S)
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR			0x0ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_EFFORT		0x1ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_GUAR_BEST		0x2ULL
+#define ICE_FXD_FLTR_QW0_FD_SPACE_BEST_GUAR		0x3ULL
+
+#define ICE_FXD_FLTR_QW0_STAT_CNT_S	16
+#define ICE_FXD_FLTR_QW0_STAT_CNT_M	\
+				(0x1FFFULL << ICE_FXD_FLTR_QW0_STAT_CNT_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_S	29
+#define ICE_FXD_FLTR_QW0_STAT_ENA_M	(0x3ULL << ICE_FXD_FLTR_QW0_STAT_ENA_S)
+#define ICE_FXD_FLTR_QW0_STAT_ENA_NONE		0x0ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS		0x1ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_BYTES		0x2ULL
+#define ICE_FXD_FLTR_QW0_STAT_ENA_PKTS_BYTES	0x3ULL
+
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_S	31
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_M	BIT_ULL(ICE_FXD_FLTR_QW0_EVICT_ENA_S)
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_FALSE	0x0ULL
+#define ICE_FXD_FLTR_QW0_EVICT_ENA_TRUE		0x1ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_S		32
+#define ICE_FXD_FLTR_QW0_TO_Q_M		(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_EQUALS_QINDEX	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_S	35
+#define ICE_FXD_FLTR_QW0_TO_Q_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_TO_Q_PRI_S)
+#define ICE_FXD_FLTR_QW0_TO_Q_PRIO1	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_S	38
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_M	\
+			(0x3ULL << ICE_FXD_FLTR_QW0_DPU_RECIPE_S)
+#define ICE_FXD_FLTR_QW0_DPU_RECIPE_DFLT	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_DROP_S		40
+#define ICE_FXD_FLTR_QW0_DROP_M		BIT_ULL(ICE_FXD_FLTR_QW0_DROP_S)
+#define ICE_FXD_FLTR_QW0_DROP_NO	0x0ULL
+#define ICE_FXD_FLTR_QW0_DROP_YES	0x1ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_S	41
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW0_FLEX_PRI_S)
+#define ICE_FXD_FLTR_QW0_FLEX_PRI_NONE	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_S	44
+#define ICE_FXD_FLTR_QW0_FLEX_MDID_M	(0xFULL << ICE_FXD_FLTR_QW0_FLEX_MDID_S)
+#define ICE_FXD_FLTR_QW0_FLEX_MDID0	0x0ULL
+
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_S	48
+#define ICE_FXD_FLTR_QW0_FLEX_VAL_M	\
+				(0xFFFFULL << ICE_FXD_FLTR_QW0_FLEX_VAL_S)
+#define ICE_FXD_FLTR_QW0_FLEX_VAL0	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_DTYPE_S	0
+#define ICE_FXD_FLTR_QW1_DTYPE_M	(0xFULL << ICE_FXD_FLTR_QW1_DTYPE_S)
+#define ICE_FXD_FLTR_QW1_PCMD_S		4
+#define ICE_FXD_FLTR_QW1_PCMD_M		BIT_ULL(ICE_FXD_FLTR_QW1_PCMD_S)
+#define ICE_FXD_FLTR_QW1_PCMD_ADD	0x0ULL
+#define ICE_FXD_FLTR_QW1_PCMD_REMOVE	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_PRI_S	5
+#define ICE_FXD_FLTR_QW1_PROF_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_PROF_PRI_S)
+#define ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_PROF_S		8
+#define ICE_FXD_FLTR_QW1_PROF_M		(0x3FULL << ICE_FXD_FLTR_QW1_PROF_S)
+#define ICE_FXD_FLTR_QW1_PROF_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FD_VSI_S	14
+#define ICE_FXD_FLTR_QW1_FD_VSI_M	(0x3FFULL << ICE_FXD_FLTR_QW1_FD_VSI_S)
+#define ICE_FXD_FLTR_QW1_SWAP_S		24
+#define ICE_FXD_FLTR_QW1_SWAP_M		BIT_ULL(ICE_FXD_FLTR_QW1_SWAP_S)
+#define ICE_FXD_FLTR_QW1_SWAP_NOT_SET	0x0ULL
+#define ICE_FXD_FLTR_QW1_SWAP_SET	0x1ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
+#define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
+#define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_MDID_FD	0x05ULL
+
+#define ICE_FXD_FLTR_QW1_FDID_S		32
+#define ICE_FXD_FLTR_QW1_FDID_M		\
+			(0xFFFFFFFFULL << ICE_FXD_FLTR_QW1_FDID_S)
+#define ICE_FXD_FLTR_QW1_FDID_ZERO	0x0ULL
+
+
+enum ice_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_STATUS_DD_S			= 0,
+	ICE_RX_DESC_STATUS_EOF_S		= 1,
+	ICE_RX_DESC_STATUS_L2TAG1P_S		= 2,
+	ICE_RX_DESC_STATUS_L3L4P_S		= 3,
+	ICE_RX_DESC_STATUS_CRCP_S		= 4,
+	ICE_RX_DESC_STATUS_TSYNINDX_S		= 5, /* 2 BITS */
+	ICE_RX_DESC_STATUS_TSYNVALID_S		= 7,
+	ICE_RX_DESC_STATUS_EXT_UDP_0_S		= 8,
+	ICE_RX_DESC_STATUS_UMBCAST_S		= 9, /* 2 BITS */
+	ICE_RX_DESC_STATUS_FLM_S		= 11,
+	ICE_RX_DESC_STATUS_FLTSTAT_S		= 12, /* 2 BITS */
+	ICE_RX_DESC_STATUS_LPBK_S		= 14,
+	ICE_RX_DESC_STATUS_IPV6EXADD_S		= 15,
+	ICE_RX_DESC_STATUS_RESERVED2_S		= 16, /* 2 BITS */
+	ICE_RX_DESC_STATUS_INT_UDP_0_S		= 18,
+	ICE_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define ICE_RXD_QW1_STATUS_S	0
+#define ICE_RXD_QW1_STATUS_M	((BIT(ICE_RX_DESC_STATUS_LAST) - 1) << \
+				 ICE_RXD_QW1_STATUS_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNINDX_S ICE_RX_DESC_STATUS_TSYNINDX_S
+#define ICE_RXD_QW1_STATUS_TSYNINDX_M (0x3UL << ICE_RXD_QW1_STATUS_TSYNINDX_S)
+
+#define ICE_RXD_QW1_STATUS_TSYNVALID_S ICE_RX_DESC_STATUS_TSYNVALID_S
+#define ICE_RXD_QW1_STATUS_TSYNVALID_M BIT_ULL(ICE_RXD_QW1_STATUS_TSYNVALID_S)
+
+
+enum ice_rx_desc_fltstat_values {
+	ICE_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	ICE_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	ICE_RX_DESC_FLTSTAT_RSV		= 2,
+	ICE_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+
+#define ICE_RXD_QW1_ERROR_S	19
+#define ICE_RXD_QW1_ERROR_M		(0xFFUL << ICE_RXD_QW1_ERROR_S)
+
+enum ice_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_ERROR_RXE_S			= 0,
+	ICE_RX_DESC_ERROR_RECIPE_S		= 1,
+	ICE_RX_DESC_ERROR_HBO_S			= 2,
+	ICE_RX_DESC_ERROR_L3L4E_S		= 3, /* 3 BITS */
+	ICE_RX_DESC_ERROR_IPE_S			= 3,
+	ICE_RX_DESC_ERROR_L4E_S			= 4,
+	ICE_RX_DESC_ERROR_EIPE_S		= 5,
+	ICE_RX_DESC_ERROR_OVERSIZE_S		= 6,
+	ICE_RX_DESC_ERROR_PPRS_S		= 7
+};
+
+enum ice_rx_desc_error_l3l4e_masks {
+	ICE_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	ICE_RX_DESC_ERROR_L3L4E_PROT		= 1,
+};
+
+#define ICE_RXD_QW1_PTYPE_S	30
+#define ICE_RXD_QW1_PTYPE_M	(0xFFULL << ICE_RXD_QW1_PTYPE_S)
+
+/* Packet type non-ip values */
+enum ice_rx_l2_ptype {
+	ICE_RX_PTYPE_L2_RESERVED	= 0,
+	ICE_RX_PTYPE_L2_MAC_PAY2	= 1,
+	ICE_RX_PTYPE_L2_FIP_PAY2	= 3,
+	ICE_RX_PTYPE_L2_OUI_PAY2	= 4,
+	ICE_RX_PTYPE_L2_MACCNTRL_PAY2	= 5,
+	ICE_RX_PTYPE_L2_LLDP_PAY2	= 6,
+	ICE_RX_PTYPE_L2_ECP_PAY2	= 7,
+	ICE_RX_PTYPE_L2_EVB_PAY2	= 8,
+	ICE_RX_PTYPE_L2_QCN_PAY2	= 9,
+	ICE_RX_PTYPE_L2_EAPOL_PAY2	= 10,
+	ICE_RX_PTYPE_L2_ARP		= 11,
+};
+
+struct ice_rx_ptype_decoded {
+	u32 ptype:10;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:2;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum ice_rx_ptype_outer_ip {
+	ICE_RX_PTYPE_OUTER_L2	= 0,
+	ICE_RX_PTYPE_OUTER_IP	= 1,
+};
+
+enum ice_rx_ptype_outer_ip_ver {
+	ICE_RX_PTYPE_OUTER_NONE	= 0,
+	ICE_RX_PTYPE_OUTER_IPV4	= 1,
+	ICE_RX_PTYPE_OUTER_IPV6	= 2,
+};
+
+enum ice_rx_ptype_outer_fragmented {
+	ICE_RX_PTYPE_NOT_FRAG	= 0,
+	ICE_RX_PTYPE_FRAG	= 1,
+};
+
+enum ice_rx_ptype_tunnel_type {
+	ICE_RX_PTYPE_TUNNEL_NONE		= 0,
+	ICE_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum ice_rx_ptype_tunnel_end_prot {
+	ICE_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	ICE_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	ICE_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum ice_rx_ptype_inner_prot {
+	ICE_RX_PTYPE_INNER_PROT_NONE		= 0,
+	ICE_RX_PTYPE_INNER_PROT_UDP		= 1,
+	ICE_RX_PTYPE_INNER_PROT_TCP		= 2,
+	ICE_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	ICE_RX_PTYPE_INNER_PROT_ICMP		= 4,
+};
+
+enum ice_rx_ptype_payload_layer {
+	ICE_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+
+#define ICE_RXD_QW1_LEN_PBUF_S	38
+#define ICE_RXD_QW1_LEN_PBUF_M	(0x3FFFULL << ICE_RXD_QW1_LEN_PBUF_S)
+
+#define ICE_RXD_QW1_LEN_HBUF_S	52
+#define ICE_RXD_QW1_LEN_HBUF_M	(0x7FFULL << ICE_RXD_QW1_LEN_HBUF_S)
+
+#define ICE_RXD_QW1_LEN_SPH_S	63
+#define ICE_RXD_QW1_LEN_SPH_M	BIT_ULL(ICE_RXD_QW1_LEN_SPH_S)
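+
+/* Illustrative sketch of consuming the QW1 shift/mask pairs above on a
+ * completed legacy descriptor (rxd is a union ice_32byte_rx_desc pointer;
+ * le64_to_cpu() stands in for whatever byte-swap helper the OS layer
+ * provides):
+ *
+ *	u64 qw1 = le64_to_cpu(rxd->wb.qword1.status_error_len);
+ *	u32 status = (qw1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+ *	u16 ptype = (qw1 & ICE_RXD_QW1_PTYPE_M) >> ICE_RXD_QW1_PTYPE_S;
+ *	u16 plen = (qw1 & ICE_RXD_QW1_LEN_PBUF_M) >> ICE_RXD_QW1_LEN_PBUF_S;
+ *
+ * The writeback is valid only once BIT(ICE_RX_DESC_STATUS_DD_S) is set in
+ * status.
+ */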
+
+
+enum ice_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_EXT_STATUS_L2TAG2P_S	= 0,
+	ICE_RX_DESC_EXT_STATUS_L2TAG3P_S	= 1,
+	ICE_RX_DESC_EXT_STATUS_FLEXBL_S		= 2, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FLEXBH_S		= 4, /* 2 BITS */
+	ICE_RX_DESC_EXT_STATUS_FDLONGB_S	= 9,
+	ICE_RX_DESC_EXT_STATUS_PELONGB_S	= 11,
+};
+
+
+enum ice_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_DESC_PE_STATUS_QPID_S		= 0, /* 18 BITS */
+	ICE_RX_DESC_PE_STATUS_L4PORT_S		= 0, /* 16 BITS */
+	ICE_RX_DESC_PE_STATUS_IPINDEX_S		= 16, /* 8 BITS */
+	ICE_RX_DESC_PE_STATUS_QPIDHIT_S		= 24,
+	ICE_RX_DESC_PE_STATUS_APBVTHIT_S	= 25,
+	ICE_RX_DESC_PE_STATUS_PORTV_S		= 26,
+	ICE_RX_DESC_PE_STATUS_URG_S		= 27,
+	ICE_RX_DESC_PE_STATUS_IPFRAG_S		= 28,
+	ICE_RX_DESC_PE_STATUS_IPOPT_S		= 29
+};
+
+#define ICE_RX_PROG_STATUS_DESC_LEN_S	38
+#define ICE_RX_PROG_STATUS_DESC_LEN	0x2000000
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S	2
+#define ICE_RX_PROG_STATUS_DESC_QW1_PROGID_M	\
+			(0x7UL << ICE_RX_PROG_STATUS_DESC_QW1_PROGID_S)
+
+
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S	19
+#define ICE_RX_PROG_STATUS_DESC_QW1_ERROR_M	\
+			(0x3FUL << ICE_RX_PROG_STATUS_DESC_QW1_ERROR_S)
+
+enum ice_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_DD_S		= 0,
+	ICE_RX_PROG_STATUS_DESC_PROG_ID_S	= 2 /* 3 BITS */
+};
+
+enum ice_rx_prog_status_desc_prog_id_masks {
+	ICE_RX_PROG_STATUS_DESC_FD_FLTR_STATUS	= 1,
+};
+
+enum ice_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_PROG_STATUS_DESC_FD_TBL_FULL_S	= 0,
+	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
+};
+
+/* RX Flex Descriptor
+ * This descriptor is used instead of the legacy version descriptor when
+ * ice_rlan_ctx.adv_desc is set
+ */
+union ice_32b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+		__le64 rsvd1;
+		__le64 rsvd2;
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile id */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+
+		/* Qword 2 */
+		__le16 status_error1;
+		u8 flex_flags2;
+		u8 time_stamp_low;
+		__le16 l2tag2_1st;
+		__le16 l2tag2_2nd;
+
+		/* Qword 3 */
+		__le16 flex_meta2;
+		__le16 flex_meta3;
+		union {
+			struct {
+				__le16 flex_meta4;
+				__le16 flex_meta5;
+			} flex;
+			__le32 ts_high;
+		} flex_ts;
+	} wb; /* writeback */
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 2
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Flow Id upper 16-bits
+ * Flex-field 4: reserved, VLAN id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 flow_id;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct ice_32b_rx_flex_desc_sw {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 flex_md1_rsvd;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 4
+ * Flex-field 0: Destination Vsi
+ */
+struct ice_32b_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi; /* [0:12]: destination vsi */
+			/* 13: vsi valid bit */
+			/* [14:15] are reserved */
+	__le16 flex_field_1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile Id 5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct ice_32b_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd; /* flex words 2-3 are reserved */
+	__le32 ts_high;
+};
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, VLAN id taken from L2Tag
+ */
+struct ice_32b_rx_flex_desc_nic_2 {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flexi_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le32 rss_hash;
+
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flexi_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+
+	/* Qword 3 */
+	__le16 flow_id;
+	__le16 src_vsi;
+	union {
+		struct {
+			__le16 rsvd;
+			__le16 flow_id_ipv6;
+		} flex;
+		__le32 ts_high;
+	} flex_ts;
+};
+
+/* Receive Flex Descriptor profile IDs: There are a total
+ * of 64 profiles. Profile IDs 0/1 are legacy, and
+ * profiles 2-63 are flex profiles that can be programmed
+ * with specific metadata (profile 7 is reserved for HW).
+ */
+enum ice_rxdid {
+	ICE_RXDID_LEGACY_0		= 0,
+	ICE_RXDID_LEGACY_1		= 1,
+	ICE_RXDID_FLEX_NIC		= 2,
+	ICE_RXDID_FLEX_NIC_2		= 6,
+	ICE_RXDID_HW			= 7,
+	ICE_RXDID_LAST			= 63,
+};
+
+/* Receive Flex Descriptor Dword Index */
+enum ice_flex_word {
+	ICE_RX_FLEX_DWORD_0 = 0,
+	ICE_RX_FLEX_DWORD_1,
+	ICE_RX_FLEX_DWORD_2,
+	ICE_RX_FLEX_DWORD_3,
+	ICE_RX_FLEX_DWORD_4,
+	ICE_RX_FLEX_DWORD_5
+};
+
+/* Receive Flex Descriptor Rx opcode values */
+enum ice_flex_opcode {
+	ICE_RX_OPC_DEBUG = 0,
+	ICE_RX_OPC_MDID,
+	ICE_RX_OPC_EXTRACT,
+	ICE_RX_OPC_PROTID
+};
+
+/* Receive Descriptor MDID values */
+enum ice_flex_rx_mdid {
+	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
+	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_RX_MDID_DST_VSI		= 13,
+	ICE_RX_MDID_SRC_VSI		= 19,
+	ICE_RX_MDID_HASH_LOW		= 56,
+	ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR0		= ICE_RX_MDID_HASH_LOW,
+	ICE_RX_MDID_ACL_CTR1		= ICE_RX_MDID_HASH_HIGH,
+	ICE_RX_MDID_ACL_CTR2		= 59
+};
+
+/* for ice_32byte_rx_flex_desc.mir_id_umb_cast member */
+#define ICE_RX_FLEX_DESC_MIRROR_M	(0x3F) /* 6-bits */
+
+/* Rx Flag64 packet flag bits */
+enum ice_rx_flg64_bits {
+	ICE_RXFLG_PKT_DSI	= 0,
+	ICE_RXFLG_EVLAN_x8100	= 15,
+	ICE_RXFLG_EVLAN_x9100,
+	ICE_RXFLG_VLAN_x8100,
+	ICE_RXFLG_TNL_MAC	= 22,
+	ICE_RXFLG_TNL_VLAN,
+	ICE_RXFLG_PKT_FRG,
+	ICE_RXFLG_FIN		= 32,
+	ICE_RXFLG_SYN,
+	ICE_RXFLG_RST,
+	ICE_RXFLG_TNL0		= 38,
+	ICE_RXFLG_TNL1,
+	ICE_RXFLG_TNL2,
+	ICE_RXFLG_UDP_GRE,
+	ICE_RXFLG_RSVD		= 63
+};
+
+enum ice_rx_flex_desc_umb_cast_bits { /* field is 2 bits long */
+	ICE_RX_FLEX_DESC_UMB_CAST_S = 6,
+	ICE_RX_FLEX_DESC_UMB_CAST_LAST /* this entry must be last!!! */
+};
+
+enum ice_umbcast_dest_addr_types {
+	ICE_DEST_UNICAST = 0,
+	ICE_DEST_MULTICAST,
+	ICE_DEST_BROADCAST,
+	ICE_DEST_MIRRORED,
+};
+
+/* for ice_32byte_rx_flex_desc.ptype_flexi_flags0 member */
+#define ICE_RX_FLEX_DESC_PTYPE_M	(0x3FF) /* 10-bits */
+
+enum ice_rx_flex_desc_flexi_flags0_bits { /* field is 6 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_S = 10,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS0_LAST /* this entry must be last!!! */
+};
+
+/* for ice_32byte_rx_flex_desc.pkt_length member */
+#define ICE_RX_FLX_DESC_PKT_LEN_M	(0x3FFF) /* 14-bits */
+
+/* for ice_32byte_rx_flex_desc.header_length_sph_flexi_flags1 member */
+#define ICE_RX_FLEX_DESC_HEADER_LEN_M	(0x7FF) /* 11-bits */
+
+enum ice_rx_flex_desc_sph_bits { /* field is 1 bit long */
+	ICE_RX_FLEX_DESC_SPH_S = 11,
+	ICE_RX_FLEX_DESC_SPH_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_flexi_flags1_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_S = 12,
+	ICE_RX_FLEX_DESC_FLEXI_FLAGS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_ext_status_bits { /* field is 4 bits long */
+	ICE_RX_FLEX_DESC_EXT_STATUS_EXT_UDP_S = 12,
+	ICE_RX_FLEX_DESC_EXT_STATUS_INT_UDP_S = 13,
+	ICE_RX_FLEX_DESC_EXT_STATUS_RECIPE_S = 14,
+	ICE_RX_FLEX_DESC_EXT_STATUS_OVERSIZE_S = 15,
+	ICE_RX_FLEX_DESC_EXT_STATUS_LAST /* entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_status_error_0_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS0_DD_S = 0,
+	ICE_RX_FLEX_DESC_STATUS0_EOF_S,
+	ICE_RX_FLEX_DESC_STATUS0_HBO_S,
+	ICE_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	ICE_RX_FLEX_DESC_STATUS0_LPBK_S,
+	ICE_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	ICE_RX_FLEX_DESC_STATUS0_RXE_S,
+	ICE_RX_FLEX_DESC_STATUS0_CRCP_S,
+	ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	ICE_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
+};
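+
+/* Illustrative sketch (desc is a struct ice_32b_rx_flex_desc_nic pointer;
+ * le16_to_cpu()/le32_to_cpu() stand in for the OS byte-swap helpers): a
+ * driver typically tests these bits before trusting the writeback fields:
+ *
+ *	u16 stat0 = le16_to_cpu(desc->status_error0);
+ *
+ *	if (stat0 & BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S)) {
+ *		if (stat0 & BIT(ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))
+ *			hash = le32_to_cpu(desc->rss_hash);
+ *		if (stat0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S))
+ *			vlan = le16_to_cpu(desc->l2tag1);
+ *	}
+ */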
+
+enum ice_rx_flex_desc_status_error_1_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+	ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+	ICE_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+	/* [10:6] reserved */
+	ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+	ICE_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+	ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
+enum ice_rx_flex_desc_exstat_bits {
+	/* Note: These are predefined bit offsets */
+	ICE_RX_FLEX_DESC_EXSTAT_EXTUDP_S = 0,
+	ICE_RX_FLEX_DESC_EXSTAT_INTUDP_S = 1,
+	ICE_RX_FLEX_DESC_EXSTAT_RECIPE_S = 2,
+	ICE_RX_FLEX_DESC_EXSTAT_OVERSIZE_S = 3,
+};
+
+
+#define ICE_RXQ_CTX_SIZE_DWORDS		8
+#define ICE_RXQ_CTX_SZ			(ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS	22
+#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS	5
+#define GLTCLAN_CQ_CNTX(i, CQ)		(GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
+
+/* RLAN Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_rlan_ctx {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+#define ICE_RLAN_BASE_S 7
+	u64 base;
+	u16 qlen;
+#define ICE_RLAN_CTX_DBUF_S 7
+	u16 dbuf; /* bigger than needed, see above for reason */
+#define ICE_RLAN_CTX_HBUF_S 6
+	u16 hbuf; /* bigger than needed, see above for reason */
+	u8 dtype;
+	u8 dsize;
+	u8 crcstrip;
+	u8 l2tsel;
+	u8 hsplit_0;
+	u8 hsplit_1;
+	u8 showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8 tphrdesc_ena;
+	u8 tphwdesc_ena;
+	u8 tphdata_ena;
+	u8 tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+};
+
+struct ice_ctx_ele {
+	u16 offset;
+	u16 size_of;
+	u16 width;
+	u16 lsb;
+};
+
+#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) {	\
+	.offset = offsetof(struct _struct, _ele),	\
+	.size_of = FIELD_SIZEOF(struct _struct, _ele),	\
+	.width = _width,				\
+	.lsb = _lsb,					\
+}
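+
+/* Illustrative use of ICE_CTX_STORE (the widths and LSB positions shown are
+ * placeholders, not the authoritative hardware layout): a context-info table
+ * pairs each ice_rlan_ctx field with its bit width and position so a generic
+ * packer can copy the fields into the ICE_RXQ_CTX_SZ-byte context image:
+ *
+ *	static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
+ *		ICE_CTX_STORE(ice_rlan_ctx, head,	13,	0),
+ *		ICE_CTX_STORE(ice_rlan_ctx, cpuid,	8,	13),
+ *		{ 0 }
+ *	};
+ */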
+
+/* for hsplit_0 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_0 {
+	ICE_RLAN_RX_HSPLIT_0_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_IP		= 2,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP	= 4,
+	ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP		= 8,
+};
+
+/* for hsplit_1 field of Rx RLAN context */
+enum ice_rlan_ctx_rx_hsplit_1 {
+	ICE_RLAN_RX_HSPLIT_1_NO_SPLIT		= 0,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_L2		= 1,
+	ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS	= 2,
+};
+
+/* TX Descriptor */
+struct ice_tx_desc {
+	__le64 buf_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define ICE_TXD_QW1_DTYPE_S	0
+#define ICE_TXD_QW1_DTYPE_M	(0xFUL << ICE_TXD_QW1_DTYPE_S)
+
+enum ice_tx_desc_dtype_value {
+	ICE_TX_DESC_DTYPE_DATA		= 0x0,
+	ICE_TX_DESC_DTYPE_CTX		= 0x1,
+	ICE_TX_DESC_DTYPE_IPSEC		= 0x3,
+	ICE_TX_DESC_DTYPE_FLTR_PROG	= 0x8,
+	ICE_TX_DESC_DTYPE_HLP_META	= 0x9,
+	/* DESC_DONE - HW has completed write-back of descriptor */
+	ICE_TX_DESC_DTYPE_DESC_DONE	= 0xF,
+};
+
+#define ICE_TXD_QW1_CMD_S	4
+#define ICE_TXD_QW1_CMD_M	(0xFFFUL << ICE_TXD_QW1_CMD_S)
+
+enum ice_tx_desc_cmd_bits {
+	ICE_TX_DESC_CMD_EOP			= 0x0001,
+	ICE_TX_DESC_CMD_RS			= 0x0002,
+	ICE_TX_DESC_CMD_RSVD			= 0x0004,
+	ICE_TX_DESC_CMD_IL2TAG1			= 0x0008,
+	ICE_TX_DESC_CMD_DUMMY			= 0x0010,
+	ICE_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	ICE_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	ICE_TX_DESC_CMD_RSVD2			= 0x0080,
+	ICE_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	ICE_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	ICE_TX_DESC_CMD_RE			= 0x0400,
+	ICE_TX_DESC_CMD_RSVD3			= 0x0800,
+};
+
+#define ICE_TXD_QW1_OFFSET_S	16
+#define ICE_TXD_QW1_OFFSET_M	(0x3FFFFULL << ICE_TXD_QW1_OFFSET_S)
+
+enum ice_tx_desc_len_fields {
+	/* Note: These are predefined bit offsets */
+	ICE_TX_DESC_LEN_MACLEN_S	= 0, /* 7 BITS */
+	ICE_TX_DESC_LEN_IPLEN_S	= 7, /* 7 BITS */
+	ICE_TX_DESC_LEN_L4_LEN_S	= 14 /* 4 BITS */
+};
+
+#define ICE_TXD_QW1_MACLEN_M (0x7FUL << ICE_TX_DESC_LEN_MACLEN_S)
+#define ICE_TXD_QW1_IPLEN_M  (0x7FUL << ICE_TX_DESC_LEN_IPLEN_S)
+#define ICE_TXD_QW1_L4LEN_M  (0xFUL << ICE_TX_DESC_LEN_L4_LEN_S)
+
+/* Tx descriptor field limits in bytes */
+#define ICE_TXD_MACLEN_MAX ((ICE_TXD_QW1_MACLEN_M >> \
+			     ICE_TX_DESC_LEN_MACLEN_S) * ICE_BYTES_PER_WORD)
+#define ICE_TXD_IPLEN_MAX ((ICE_TXD_QW1_IPLEN_M >> \
+			    ICE_TX_DESC_LEN_IPLEN_S) * ICE_BYTES_PER_DWORD)
+#define ICE_TXD_L4LEN_MAX ((ICE_TXD_QW1_L4LEN_M >> \
+			    ICE_TX_DESC_LEN_L4_LEN_S) * ICE_BYTES_PER_DWORD)
+
+#define ICE_TXD_QW1_TX_BUF_SZ_S	34
+#define ICE_TXD_QW1_TX_BUF_SZ_M	(0x3FFFULL << ICE_TXD_QW1_TX_BUF_SZ_S)
+
+#define ICE_TXD_QW1_L2TAG1_S	48
+#define ICE_TXD_QW1_L2TAG1_M	(0xFFFFULL << ICE_TXD_QW1_L2TAG1_S)
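+
+/* Illustrative sketch of filling a data descriptor from the shift/mask pairs
+ * above (cpu_to_le64() stands in for the OS byte-swap helper; maclen is in
+ * 2-byte units and iplen/l4len in 4-byte units, per the limits below):
+ *
+ *	u64 off = (maclen << ICE_TX_DESC_LEN_MACLEN_S) |
+ *		  (iplen << ICE_TX_DESC_LEN_IPLEN_S) |
+ *		  (l4len << ICE_TX_DESC_LEN_L4_LEN_S);
+ *
+ *	txd->buf_addr = cpu_to_le64(dma_addr);
+ *	txd->cmd_type_offset_bsz = cpu_to_le64(ICE_TX_DESC_DTYPE_DATA |
+ *		((u64)(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS) <<
+ *		 ICE_TXD_QW1_CMD_S) |
+ *		(off << ICE_TXD_QW1_OFFSET_S) |
+ *		((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S));
+ */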
+
+/* Context descriptors */
+struct ice_tx_ctx_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 qw1;
+};
+
+#define ICE_TXD_CTX_QW1_DTYPE_S	0
+#define ICE_TXD_CTX_QW1_DTYPE_M	(0xFUL << ICE_TXD_CTX_QW1_DTYPE_S)
+
+#define ICE_TXD_CTX_QW1_CMD_S	4
+#define ICE_TXD_CTX_QW1_CMD_M	(0x7FUL << ICE_TXD_CTX_QW1_CMD_S)
+
+#define ICE_TXD_CTX_QW1_IPSEC_S	11
+#define ICE_TXD_CTX_QW1_IPSEC_M	(0x7FUL << ICE_TXD_CTX_QW1_IPSEC_S)
+
+#define ICE_TXD_CTX_QW1_TSO_LEN_S	30
+#define ICE_TXD_CTX_QW1_TSO_LEN_M	\
+			(0x3FFFFULL << ICE_TXD_CTX_QW1_TSO_LEN_S)
+
+#define ICE_TXD_CTX_QW1_TSYN_S	ICE_TXD_CTX_QW1_TSO_LEN_S
+#define ICE_TXD_CTX_QW1_TSYN_M	ICE_TXD_CTX_QW1_TSO_LEN_M
+
+#define ICE_TXD_CTX_QW1_MSS_S	50
+#define ICE_TXD_CTX_QW1_MSS_M	(0x3FFFULL << ICE_TXD_CTX_QW1_MSS_S)
+#define ICE_TXD_CTX_MIN_MSS	64
+#define ICE_TXD_CTX_MAX_MSS	9668
+
+#define ICE_TXD_CTX_QW1_VSI_S	50
+#define ICE_TXD_CTX_QW1_VSI_M	(0x3FFULL << ICE_TXD_CTX_QW1_VSI_S)
+
+enum ice_tx_ctx_desc_cmd_bits {
+	ICE_TX_CTX_DESC_TSO		= 0x01,
+	ICE_TX_CTX_DESC_TSYN		= 0x02,
+	ICE_TX_CTX_DESC_IL2TAG2		= 0x04,
+	ICE_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	ICE_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	ICE_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	ICE_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	ICE_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	ICE_TX_CTX_DESC_RESERVED	= 0x40
+};
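+
+/* Illustrative sketch of a TSO context descriptor's qw1 built from the
+ * command bits above (cpu_to_le64() stands in for the OS byte-swap helper;
+ * tso_len is the total L4 payload length and mss lies within
+ * [ICE_TXD_CTX_MIN_MSS, ICE_TXD_CTX_MAX_MSS]):
+ *
+ *	ctx->qw1 = cpu_to_le64(ICE_TX_DESC_DTYPE_CTX |
+ *		((u64)ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
+ *		((u64)tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+ *		((u64)mss << ICE_TXD_CTX_QW1_MSS_S));
+ */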
+
+enum ice_tx_ctx_desc_eipt_offload {
+	ICE_TX_CTX_EIPT_NONE		= 0x0,
+	ICE_TX_CTX_EIPT_IPV6		= 0x1,
+	ICE_TX_CTX_EIPT_IPV4_NO_CSUM	= 0x2,
+	ICE_TX_CTX_EIPT_IPV4		= 0x3
+};
+
+#define ICE_TXD_CTX_QW0_EIPT_S	0
+#define ICE_TXD_CTX_QW0_EIPT_M	(0x3ULL << ICE_TXD_CTX_QW0_EIPT_S)
+
+#define ICE_TXD_CTX_QW0_EIPLEN_S	2
+#define ICE_TXD_CTX_QW0_EIPLEN_M	(0x7FUL << ICE_TXD_CTX_QW0_EIPLEN_S)
+
+#define ICE_TXD_CTX_QW0_L4TUNT_S	9
+#define ICE_TXD_CTX_QW0_L4TUNT_M	(0x3ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_UDP_TUNNELING	BIT_ULL(ICE_TXD_CTX_QW0_L4TUNT_S)
+#define ICE_TXD_CTX_GRE_TUNNELING	(0x2ULL << ICE_TXD_CTX_QW0_L4TUNT_S)
+
+#define ICE_TXD_CTX_QW0_EIP_NOINC_S	11
+#define ICE_TXD_CTX_QW0_EIP_NOINC_M	BIT_ULL(ICE_TXD_CTX_QW0_EIP_NOINC_S)
+
+#define ICE_TXD_CTX_EIP_NOINC_IPID_CONST	ICE_TXD_CTX_QW0_EIP_NOINC_M
+
+#define ICE_TXD_CTX_QW0_NATLEN_S	12
+#define ICE_TXD_CTX_QW0_NATLEN_M	(0x7FULL << ICE_TXD_CTX_QW0_NATLEN_S)
+
+#define ICE_TXD_CTX_QW0_DECTTL_S	19
+#define ICE_TXD_CTX_QW0_DECTTL_M	(0xFULL << ICE_TXD_CTX_QW0_DECTTL_S)
+
+#define ICE_TXD_CTX_QW0_L4T_CS_S	23
+#define ICE_TXD_CTX_QW0_L4T_CS_M	BIT_ULL(ICE_TXD_CTX_QW0_L4T_CS_S)
+
+
+#define ICE_LAN_TXQ_MAX_QGRPS	127
+#define ICE_LAN_TXQ_MAX_QDIS	1023
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct ice_tlan_ctx {
+#define ICE_TLAN_CTX_BASE_S	7
+	u64 base;		/* base is defined in 128-byte units */
+	u8 port_num;
+	u16 cgd_num;		/* bigger than needed, see above for reason */
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+#define ICE_TLAN_CTX_VMVF_TYPE_VMQ	1
+#define ICE_TLAN_CTX_VMVF_TYPE_PF	2
+	u16 src_vsi;
+	u8 tsyn_ena;
+	u8 alt_vlan;
+	u16 cpuid;		/* bigger than needed, see above for reason */
+	u8 wb_mode;
+	u8 tphrd_desc;
+	u8 tphrd;
+	u8 tphwr_desc;
+	u16 cmpq_id;
+	u16 qnum_in_func;
+	u8 itr_notification_mode;
+	u8 adjust_prof_id;
+	u32 qlen;		/* bigger than needed, see above for reason */
+	u8 quanta_prof_idx;
+	u8 tso_ena;
+	u16 tso_qnum;
+	u8 legacy_int;
+	u8 drop_ena;
+	u8 cache_prof_idx;
+	u8 pkt_shaper_prof_idx;
+	u8 int_q_state;	/* width not needed - internal, do not write */
+};
+
+/* LAN Tx Completion Queue data */
+#pragma pack(1)
+struct ice_tx_cmpltnq {
+	u16 txq_id;
+	u8 generation;
+	u16 tx_head;
+	u8 cmpl_type;
+};
+#pragma pack()
+
+
+/* LAN Tx Completion Queue Context */
+#pragma pack(1)
+struct ice_tx_cmpltnq_ctx {
+	u64 base;
+	u32 q_len;
+#define ICE_TX_CMPLTNQ_CTX_Q_LEN_S	4
+	u8 generation;
+	u32 wrt_ptr;
+	u8 pf_num;
+	u16 vmvf_num;
+	u8 vmvf_type;
+	u8 tph_desc_wr;
+	u8 cpuid;
+	u32 cmpltn_cache[16];
+};
+#pragma pack()
+
+/* LAN Tx Doorbell Descriptor Format */
+struct ice_tx_drbell_fmt {
+	u16 txq_id;
+	u8 dd;
+	u8 rs;
+	u32 db;
+};
+
+
+/* LAN Tx Doorbell Queue Context */
+#pragma pack(1)
+struct ice_tx_drbell_q_ctx {
+	u64 base;
+	u16 ring_len;
+	u8 pf_num;
+	u16 vf_num;
+	u8 vmvf_type;
+	u8 cpuid;
+	u8 tph_desc_rd;
+	u8 tph_desc_wr;
+	u8 db_q_en;
+	u16 rd_head;
+	u16 rd_tail;
+};
+#pragma pack()
+
+/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT ice_ptype_lkup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum ice_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
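+
+/* The same work flow as a minimal C sketch (field names come from
+ * struct ice_rx_ptype_decoded; the handler functions are placeholders and
+ * the ice_ptype_lkup table itself follows below):
+ *
+ *	struct ice_rx_ptype_decoded d = ice_ptype_lkup[ptype];
+ *
+ *	if (!d.known)
+ *		handle_unknown_packet();
+ *	else if (d.outer_ip == ICE_RX_PTYPE_OUTER_IP)
+ *		inspect_tunnel_and_inner_fields(&d);
+ *	else
+ *		decode_l2_ptype((enum ice_rx_l2_ptype)d.ptype);
+ */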
+
+/* macro to make the table lines short */
+#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP, \
+		ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		ICE_RX_PTYPE_##OUTER_FRAG, \
+		ICE_RX_PTYPE_TUNNEL_##T, \
+		ICE_RX_PTYPE_TUNNEL_END_##TE, \
+		ICE_RX_PTYPE_##TEF, \
+		ICE_RX_PTYPE_INNER_PROT_##I, \
+		ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define ICE_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define ICE_RX_PTYPE_NOF		ICE_RX_PTYPE_NOT_FRAG
+#define ICE_RX_PTYPE_FRG		ICE_RX_PTYPE_FRAG
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+	/* L2 Packet types */
+	ICE_PTT_UNUSED_ENTRY(0),
+	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(3),
+	ICE_PTT_UNUSED_ENTRY(4),
+	ICE_PTT_UNUSED_ENTRY(5),
+	ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(8),
+	ICE_PTT_UNUSED_ENTRY(9),
+	ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	ICE_PTT_UNUSED_ENTRY(12),
+	ICE_PTT_UNUSED_ENTRY(13),
+	ICE_PTT_UNUSED_ENTRY(14),
+	ICE_PTT_UNUSED_ENTRY(15),
+	ICE_PTT_UNUSED_ENTRY(16),
+	ICE_PTT_UNUSED_ENTRY(17),
+	ICE_PTT_UNUSED_ENTRY(18),
+	ICE_PTT_UNUSED_ENTRY(19),
+	ICE_PTT_UNUSED_ENTRY(20),
+	ICE_PTT_UNUSED_ENTRY(21),
+
+	/* Non Tunneled IPv4 */
+	ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(25),
+	ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(32),
+	ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(39),
+	ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(47),
+	ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(54),
+	ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(62),
+	ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(69),
+	ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(77),
+	ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(84),
+	ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
+	ICE_PTT_UNUSED_ENTRY(91),
+	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(98),
+	ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(105),
+	ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(113),
+	ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(120),
+	ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(128),
+	ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(135),
+	ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(143),
+	ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	ICE_PTT_UNUSED_ENTRY(150),
+	ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	ICE_PTT_UNUSED_ENTRY(154),
+	ICE_PTT_UNUSED_ENTRY(155),
+	ICE_PTT_UNUSED_ENTRY(156),
+	ICE_PTT_UNUSED_ENTRY(157),
+	ICE_PTT_UNUSED_ENTRY(158),
+	ICE_PTT_UNUSED_ENTRY(159),
+
+	ICE_PTT_UNUSED_ENTRY(160),
+	ICE_PTT_UNUSED_ENTRY(161),
+	ICE_PTT_UNUSED_ENTRY(162),
+	ICE_PTT_UNUSED_ENTRY(163),
+	ICE_PTT_UNUSED_ENTRY(164),
+	ICE_PTT_UNUSED_ENTRY(165),
+	ICE_PTT_UNUSED_ENTRY(166),
+	ICE_PTT_UNUSED_ENTRY(167),
+	ICE_PTT_UNUSED_ENTRY(168),
+	ICE_PTT_UNUSED_ENTRY(169),
+
+	ICE_PTT_UNUSED_ENTRY(170),
+	ICE_PTT_UNUSED_ENTRY(171),
+	ICE_PTT_UNUSED_ENTRY(172),
+	ICE_PTT_UNUSED_ENTRY(173),
+	ICE_PTT_UNUSED_ENTRY(174),
+	ICE_PTT_UNUSED_ENTRY(175),
+	ICE_PTT_UNUSED_ENTRY(176),
+	ICE_PTT_UNUSED_ENTRY(177),
+	ICE_PTT_UNUSED_ENTRY(178),
+	ICE_PTT_UNUSED_ENTRY(179),
+
+	ICE_PTT_UNUSED_ENTRY(180),
+	ICE_PTT_UNUSED_ENTRY(181),
+	ICE_PTT_UNUSED_ENTRY(182),
+	ICE_PTT_UNUSED_ENTRY(183),
+	ICE_PTT_UNUSED_ENTRY(184),
+	ICE_PTT_UNUSED_ENTRY(185),
+	ICE_PTT_UNUSED_ENTRY(186),
+	ICE_PTT_UNUSED_ENTRY(187),
+	ICE_PTT_UNUSED_ENTRY(188),
+	ICE_PTT_UNUSED_ENTRY(189),
+
+	ICE_PTT_UNUSED_ENTRY(190),
+	ICE_PTT_UNUSED_ENTRY(191),
+	ICE_PTT_UNUSED_ENTRY(192),
+	ICE_PTT_UNUSED_ENTRY(193),
+	ICE_PTT_UNUSED_ENTRY(194),
+	ICE_PTT_UNUSED_ENTRY(195),
+	ICE_PTT_UNUSED_ENTRY(196),
+	ICE_PTT_UNUSED_ENTRY(197),
+	ICE_PTT_UNUSED_ENTRY(198),
+	ICE_PTT_UNUSED_ENTRY(199),
+
+	ICE_PTT_UNUSED_ENTRY(200),
+	ICE_PTT_UNUSED_ENTRY(201),
+	ICE_PTT_UNUSED_ENTRY(202),
+	ICE_PTT_UNUSED_ENTRY(203),
+	ICE_PTT_UNUSED_ENTRY(204),
+	ICE_PTT_UNUSED_ENTRY(205),
+	ICE_PTT_UNUSED_ENTRY(206),
+	ICE_PTT_UNUSED_ENTRY(207),
+	ICE_PTT_UNUSED_ENTRY(208),
+	ICE_PTT_UNUSED_ENTRY(209),
+
+	ICE_PTT_UNUSED_ENTRY(210),
+	ICE_PTT_UNUSED_ENTRY(211),
+	ICE_PTT_UNUSED_ENTRY(212),
+	ICE_PTT_UNUSED_ENTRY(213),
+	ICE_PTT_UNUSED_ENTRY(214),
+	ICE_PTT_UNUSED_ENTRY(215),
+	ICE_PTT_UNUSED_ENTRY(216),
+	ICE_PTT_UNUSED_ENTRY(217),
+	ICE_PTT_UNUSED_ENTRY(218),
+	ICE_PTT_UNUSED_ENTRY(219),
+
+	ICE_PTT_UNUSED_ENTRY(220),
+	ICE_PTT_UNUSED_ENTRY(221),
+	ICE_PTT_UNUSED_ENTRY(222),
+	ICE_PTT_UNUSED_ENTRY(223),
+	ICE_PTT_UNUSED_ENTRY(224),
+	ICE_PTT_UNUSED_ENTRY(225),
+	ICE_PTT_UNUSED_ENTRY(226),
+	ICE_PTT_UNUSED_ENTRY(227),
+	ICE_PTT_UNUSED_ENTRY(228),
+	ICE_PTT_UNUSED_ENTRY(229),
+
+	ICE_PTT_UNUSED_ENTRY(230),
+	ICE_PTT_UNUSED_ENTRY(231),
+	ICE_PTT_UNUSED_ENTRY(232),
+	ICE_PTT_UNUSED_ENTRY(233),
+	ICE_PTT_UNUSED_ENTRY(234),
+	ICE_PTT_UNUSED_ENTRY(235),
+	ICE_PTT_UNUSED_ENTRY(236),
+	ICE_PTT_UNUSED_ENTRY(237),
+	ICE_PTT_UNUSED_ENTRY(238),
+	ICE_PTT_UNUSED_ENTRY(239),
+
+	ICE_PTT_UNUSED_ENTRY(240),
+	ICE_PTT_UNUSED_ENTRY(241),
+	ICE_PTT_UNUSED_ENTRY(242),
+	ICE_PTT_UNUSED_ENTRY(243),
+	ICE_PTT_UNUSED_ENTRY(244),
+	ICE_PTT_UNUSED_ENTRY(245),
+	ICE_PTT_UNUSED_ENTRY(246),
+	ICE_PTT_UNUSED_ENTRY(247),
+	ICE_PTT_UNUSED_ENTRY(248),
+	ICE_PTT_UNUSED_ENTRY(249),
+
+	ICE_PTT_UNUSED_ENTRY(250),
+	ICE_PTT_UNUSED_ENTRY(251),
+	ICE_PTT_UNUSED_ENTRY(252),
+	ICE_PTT_UNUSED_ENTRY(253),
+	ICE_PTT_UNUSED_ENTRY(254),
+	ICE_PTT_UNUSED_ENTRY(255),
+	ICE_PTT_UNUSED_ENTRY(256),
+	ICE_PTT_UNUSED_ENTRY(257),
+	ICE_PTT_UNUSED_ENTRY(258),
+	ICE_PTT_UNUSED_ENTRY(259),
+
+	ICE_PTT_UNUSED_ENTRY(260),
+	ICE_PTT_UNUSED_ENTRY(261),
+	ICE_PTT_UNUSED_ENTRY(262),
+	ICE_PTT_UNUSED_ENTRY(263),
+	ICE_PTT_UNUSED_ENTRY(264),
+	ICE_PTT_UNUSED_ENTRY(265),
+	ICE_PTT_UNUSED_ENTRY(266),
+	ICE_PTT_UNUSED_ENTRY(267),
+	ICE_PTT_UNUSED_ENTRY(268),
+	ICE_PTT_UNUSED_ENTRY(269),
+
+	ICE_PTT_UNUSED_ENTRY(270),
+	ICE_PTT_UNUSED_ENTRY(271),
+	ICE_PTT_UNUSED_ENTRY(272),
+	ICE_PTT_UNUSED_ENTRY(273),
+	ICE_PTT_UNUSED_ENTRY(274),
+	ICE_PTT_UNUSED_ENTRY(275),
+	ICE_PTT_UNUSED_ENTRY(276),
+	ICE_PTT_UNUSED_ENTRY(277),
+	ICE_PTT_UNUSED_ENTRY(278),
+	ICE_PTT_UNUSED_ENTRY(279),
+
+	ICE_PTT_UNUSED_ENTRY(280),
+	ICE_PTT_UNUSED_ENTRY(281),
+	ICE_PTT_UNUSED_ENTRY(282),
+	ICE_PTT_UNUSED_ENTRY(283),
+	ICE_PTT_UNUSED_ENTRY(284),
+	ICE_PTT_UNUSED_ENTRY(285),
+	ICE_PTT_UNUSED_ENTRY(286),
+	ICE_PTT_UNUSED_ENTRY(287),
+	ICE_PTT_UNUSED_ENTRY(288),
+	ICE_PTT_UNUSED_ENTRY(289),
+
+	ICE_PTT_UNUSED_ENTRY(290),
+	ICE_PTT_UNUSED_ENTRY(291),
+	ICE_PTT_UNUSED_ENTRY(292),
+	ICE_PTT_UNUSED_ENTRY(293),
+	ICE_PTT_UNUSED_ENTRY(294),
+	ICE_PTT_UNUSED_ENTRY(295),
+	ICE_PTT_UNUSED_ENTRY(296),
+	ICE_PTT_UNUSED_ENTRY(297),
+	ICE_PTT_UNUSED_ENTRY(298),
+	ICE_PTT_UNUSED_ENTRY(299),
+
+	ICE_PTT_UNUSED_ENTRY(300),
+	ICE_PTT_UNUSED_ENTRY(301),
+	ICE_PTT_UNUSED_ENTRY(302),
+	ICE_PTT_UNUSED_ENTRY(303),
+	ICE_PTT_UNUSED_ENTRY(304),
+	ICE_PTT_UNUSED_ENTRY(305),
+	ICE_PTT_UNUSED_ENTRY(306),
+	ICE_PTT_UNUSED_ENTRY(307),
+	ICE_PTT_UNUSED_ENTRY(308),
+	ICE_PTT_UNUSED_ENTRY(309),
+
+	ICE_PTT_UNUSED_ENTRY(310),
+	ICE_PTT_UNUSED_ENTRY(311),
+	ICE_PTT_UNUSED_ENTRY(312),
+	ICE_PTT_UNUSED_ENTRY(313),
+	ICE_PTT_UNUSED_ENTRY(314),
+	ICE_PTT_UNUSED_ENTRY(315),
+	ICE_PTT_UNUSED_ENTRY(316),
+	ICE_PTT_UNUSED_ENTRY(317),
+	ICE_PTT_UNUSED_ENTRY(318),
+	ICE_PTT_UNUSED_ENTRY(319),
+
+	ICE_PTT_UNUSED_ENTRY(320),
+	ICE_PTT_UNUSED_ENTRY(321),
+	ICE_PTT_UNUSED_ENTRY(322),
+	ICE_PTT_UNUSED_ENTRY(323),
+	ICE_PTT_UNUSED_ENTRY(324),
+	ICE_PTT_UNUSED_ENTRY(325),
+	ICE_PTT_UNUSED_ENTRY(326),
+	ICE_PTT_UNUSED_ENTRY(327),
+	ICE_PTT_UNUSED_ENTRY(328),
+	ICE_PTT_UNUSED_ENTRY(329),
+
+	ICE_PTT_UNUSED_ENTRY(330),
+	ICE_PTT_UNUSED_ENTRY(331),
+	ICE_PTT_UNUSED_ENTRY(332),
+	ICE_PTT_UNUSED_ENTRY(333),
+	ICE_PTT_UNUSED_ENTRY(334),
+	ICE_PTT_UNUSED_ENTRY(335),
+	ICE_PTT_UNUSED_ENTRY(336),
+	ICE_PTT_UNUSED_ENTRY(337),
+	ICE_PTT_UNUSED_ENTRY(338),
+	ICE_PTT_UNUSED_ENTRY(339),
+
+	ICE_PTT_UNUSED_ENTRY(340),
+	ICE_PTT_UNUSED_ENTRY(341),
+	ICE_PTT_UNUSED_ENTRY(342),
+	ICE_PTT_UNUSED_ENTRY(343),
+	ICE_PTT_UNUSED_ENTRY(344),
+	ICE_PTT_UNUSED_ENTRY(345),
+	ICE_PTT_UNUSED_ENTRY(346),
+	ICE_PTT_UNUSED_ENTRY(347),
+	ICE_PTT_UNUSED_ENTRY(348),
+	ICE_PTT_UNUSED_ENTRY(349),
+
+	ICE_PTT_UNUSED_ENTRY(350),
+	ICE_PTT_UNUSED_ENTRY(351),
+	ICE_PTT_UNUSED_ENTRY(352),
+	ICE_PTT_UNUSED_ENTRY(353),
+	ICE_PTT_UNUSED_ENTRY(354),
+	ICE_PTT_UNUSED_ENTRY(355),
+	ICE_PTT_UNUSED_ENTRY(356),
+	ICE_PTT_UNUSED_ENTRY(357),
+	ICE_PTT_UNUSED_ENTRY(358),
+	ICE_PTT_UNUSED_ENTRY(359),
+
+	ICE_PTT_UNUSED_ENTRY(360),
+	ICE_PTT_UNUSED_ENTRY(361),
+	ICE_PTT_UNUSED_ENTRY(362),
+	ICE_PTT_UNUSED_ENTRY(363),
+	ICE_PTT_UNUSED_ENTRY(364),
+	ICE_PTT_UNUSED_ENTRY(365),
+	ICE_PTT_UNUSED_ENTRY(366),
+	ICE_PTT_UNUSED_ENTRY(367),
+	ICE_PTT_UNUSED_ENTRY(368),
+	ICE_PTT_UNUSED_ENTRY(369),
+
+	ICE_PTT_UNUSED_ENTRY(370),
+	ICE_PTT_UNUSED_ENTRY(371),
+	ICE_PTT_UNUSED_ENTRY(372),
+	ICE_PTT_UNUSED_ENTRY(373),
+	ICE_PTT_UNUSED_ENTRY(374),
+	ICE_PTT_UNUSED_ENTRY(375),
+	ICE_PTT_UNUSED_ENTRY(376),
+	ICE_PTT_UNUSED_ENTRY(377),
+	ICE_PTT_UNUSED_ENTRY(378),
+	ICE_PTT_UNUSED_ENTRY(379),
+
+	ICE_PTT_UNUSED_ENTRY(380),
+	ICE_PTT_UNUSED_ENTRY(381),
+	ICE_PTT_UNUSED_ENTRY(382),
+	ICE_PTT_UNUSED_ENTRY(383),
+	ICE_PTT_UNUSED_ENTRY(384),
+	ICE_PTT_UNUSED_ENTRY(385),
+	ICE_PTT_UNUSED_ENTRY(386),
+	ICE_PTT_UNUSED_ENTRY(387),
+	ICE_PTT_UNUSED_ENTRY(388),
+	ICE_PTT_UNUSED_ENTRY(389),
+
+	ICE_PTT_UNUSED_ENTRY(390),
+	ICE_PTT_UNUSED_ENTRY(391),
+	ICE_PTT_UNUSED_ENTRY(392),
+	ICE_PTT_UNUSED_ENTRY(393),
+	ICE_PTT_UNUSED_ENTRY(394),
+	ICE_PTT_UNUSED_ENTRY(395),
+	ICE_PTT_UNUSED_ENTRY(396),
+	ICE_PTT_UNUSED_ENTRY(397),
+	ICE_PTT_UNUSED_ENTRY(398),
+	ICE_PTT_UNUSED_ENTRY(399),
+
+	ICE_PTT_UNUSED_ENTRY(400),
+	ICE_PTT_UNUSED_ENTRY(401),
+	ICE_PTT_UNUSED_ENTRY(402),
+	ICE_PTT_UNUSED_ENTRY(403),
+	ICE_PTT_UNUSED_ENTRY(404),
+	ICE_PTT_UNUSED_ENTRY(405),
+	ICE_PTT_UNUSED_ENTRY(406),
+	ICE_PTT_UNUSED_ENTRY(407),
+	ICE_PTT_UNUSED_ENTRY(408),
+	ICE_PTT_UNUSED_ENTRY(409),
+
+	ICE_PTT_UNUSED_ENTRY(410),
+	ICE_PTT_UNUSED_ENTRY(411),
+	ICE_PTT_UNUSED_ENTRY(412),
+	ICE_PTT_UNUSED_ENTRY(413),
+	ICE_PTT_UNUSED_ENTRY(414),
+	ICE_PTT_UNUSED_ENTRY(415),
+	ICE_PTT_UNUSED_ENTRY(416),
+	ICE_PTT_UNUSED_ENTRY(417),
+	ICE_PTT_UNUSED_ENTRY(418),
+	ICE_PTT_UNUSED_ENTRY(419),
+
+	ICE_PTT_UNUSED_ENTRY(420),
+	ICE_PTT_UNUSED_ENTRY(421),
+	ICE_PTT_UNUSED_ENTRY(422),
+	ICE_PTT_UNUSED_ENTRY(423),
+	ICE_PTT_UNUSED_ENTRY(424),
+	ICE_PTT_UNUSED_ENTRY(425),
+	ICE_PTT_UNUSED_ENTRY(426),
+	ICE_PTT_UNUSED_ENTRY(427),
+	ICE_PTT_UNUSED_ENTRY(428),
+	ICE_PTT_UNUSED_ENTRY(429),
+
+	ICE_PTT_UNUSED_ENTRY(430),
+	ICE_PTT_UNUSED_ENTRY(431),
+	ICE_PTT_UNUSED_ENTRY(432),
+	ICE_PTT_UNUSED_ENTRY(433),
+	ICE_PTT_UNUSED_ENTRY(434),
+	ICE_PTT_UNUSED_ENTRY(435),
+	ICE_PTT_UNUSED_ENTRY(436),
+	ICE_PTT_UNUSED_ENTRY(437),
+	ICE_PTT_UNUSED_ENTRY(438),
+	ICE_PTT_UNUSED_ENTRY(439),
+
+	ICE_PTT_UNUSED_ENTRY(440),
+	ICE_PTT_UNUSED_ENTRY(441),
+	ICE_PTT_UNUSED_ENTRY(442),
+	ICE_PTT_UNUSED_ENTRY(443),
+	ICE_PTT_UNUSED_ENTRY(444),
+	ICE_PTT_UNUSED_ENTRY(445),
+	ICE_PTT_UNUSED_ENTRY(446),
+	ICE_PTT_UNUSED_ENTRY(447),
+	ICE_PTT_UNUSED_ENTRY(448),
+	ICE_PTT_UNUSED_ENTRY(449),
+
+	ICE_PTT_UNUSED_ENTRY(450),
+	ICE_PTT_UNUSED_ENTRY(451),
+	ICE_PTT_UNUSED_ENTRY(452),
+	ICE_PTT_UNUSED_ENTRY(453),
+	ICE_PTT_UNUSED_ENTRY(454),
+	ICE_PTT_UNUSED_ENTRY(455),
+	ICE_PTT_UNUSED_ENTRY(456),
+	ICE_PTT_UNUSED_ENTRY(457),
+	ICE_PTT_UNUSED_ENTRY(458),
+	ICE_PTT_UNUSED_ENTRY(459),
+
+	ICE_PTT_UNUSED_ENTRY(460),
+	ICE_PTT_UNUSED_ENTRY(461),
+	ICE_PTT_UNUSED_ENTRY(462),
+	ICE_PTT_UNUSED_ENTRY(463),
+	ICE_PTT_UNUSED_ENTRY(464),
+	ICE_PTT_UNUSED_ENTRY(465),
+	ICE_PTT_UNUSED_ENTRY(466),
+	ICE_PTT_UNUSED_ENTRY(467),
+	ICE_PTT_UNUSED_ENTRY(468),
+	ICE_PTT_UNUSED_ENTRY(469),
+
+	ICE_PTT_UNUSED_ENTRY(470),
+	ICE_PTT_UNUSED_ENTRY(471),
+	ICE_PTT_UNUSED_ENTRY(472),
+	ICE_PTT_UNUSED_ENTRY(473),
+	ICE_PTT_UNUSED_ENTRY(474),
+	ICE_PTT_UNUSED_ENTRY(475),
+	ICE_PTT_UNUSED_ENTRY(476),
+	ICE_PTT_UNUSED_ENTRY(477),
+	ICE_PTT_UNUSED_ENTRY(478),
+	ICE_PTT_UNUSED_ENTRY(479),
+
+	ICE_PTT_UNUSED_ENTRY(480),
+	ICE_PTT_UNUSED_ENTRY(481),
+	ICE_PTT_UNUSED_ENTRY(482),
+	ICE_PTT_UNUSED_ENTRY(483),
+	ICE_PTT_UNUSED_ENTRY(484),
+	ICE_PTT_UNUSED_ENTRY(485),
+	ICE_PTT_UNUSED_ENTRY(486),
+	ICE_PTT_UNUSED_ENTRY(487),
+	ICE_PTT_UNUSED_ENTRY(488),
+	ICE_PTT_UNUSED_ENTRY(489),
+
+	ICE_PTT_UNUSED_ENTRY(490),
+	ICE_PTT_UNUSED_ENTRY(491),
+	ICE_PTT_UNUSED_ENTRY(492),
+	ICE_PTT_UNUSED_ENTRY(493),
+	ICE_PTT_UNUSED_ENTRY(494),
+	ICE_PTT_UNUSED_ENTRY(495),
+	ICE_PTT_UNUSED_ENTRY(496),
+	ICE_PTT_UNUSED_ENTRY(497),
+	ICE_PTT_UNUSED_ENTRY(498),
+	ICE_PTT_UNUSED_ENTRY(499),
+
+	ICE_PTT_UNUSED_ENTRY(500),
+	ICE_PTT_UNUSED_ENTRY(501),
+	ICE_PTT_UNUSED_ENTRY(502),
+	ICE_PTT_UNUSED_ENTRY(503),
+	ICE_PTT_UNUSED_ENTRY(504),
+	ICE_PTT_UNUSED_ENTRY(505),
+	ICE_PTT_UNUSED_ENTRY(506),
+	ICE_PTT_UNUSED_ENTRY(507),
+	ICE_PTT_UNUSED_ENTRY(508),
+	ICE_PTT_UNUSED_ENTRY(509),
+
+	ICE_PTT_UNUSED_ENTRY(510),
+	ICE_PTT_UNUSED_ENTRY(511),
+	ICE_PTT_UNUSED_ENTRY(512),
+	ICE_PTT_UNUSED_ENTRY(513),
+	ICE_PTT_UNUSED_ENTRY(514),
+	ICE_PTT_UNUSED_ENTRY(515),
+	ICE_PTT_UNUSED_ENTRY(516),
+	ICE_PTT_UNUSED_ENTRY(517),
+	ICE_PTT_UNUSED_ENTRY(518),
+	ICE_PTT_UNUSED_ENTRY(519),
+
+	ICE_PTT_UNUSED_ENTRY(520),
+	ICE_PTT_UNUSED_ENTRY(521),
+	ICE_PTT_UNUSED_ENTRY(522),
+	ICE_PTT_UNUSED_ENTRY(523),
+	ICE_PTT_UNUSED_ENTRY(524),
+	ICE_PTT_UNUSED_ENTRY(525),
+	ICE_PTT_UNUSED_ENTRY(526),
+	ICE_PTT_UNUSED_ENTRY(527),
+	ICE_PTT_UNUSED_ENTRY(528),
+	ICE_PTT_UNUSED_ENTRY(529),
+
+	ICE_PTT_UNUSED_ENTRY(530),
+	ICE_PTT_UNUSED_ENTRY(531),
+	ICE_PTT_UNUSED_ENTRY(532),
+	ICE_PTT_UNUSED_ENTRY(533),
+	ICE_PTT_UNUSED_ENTRY(534),
+	ICE_PTT_UNUSED_ENTRY(535),
+	ICE_PTT_UNUSED_ENTRY(536),
+	ICE_PTT_UNUSED_ENTRY(537),
+	ICE_PTT_UNUSED_ENTRY(538),
+	ICE_PTT_UNUSED_ENTRY(539),
+
+	ICE_PTT_UNUSED_ENTRY(540),
+	ICE_PTT_UNUSED_ENTRY(541),
+	ICE_PTT_UNUSED_ENTRY(542),
+	ICE_PTT_UNUSED_ENTRY(543),
+	ICE_PTT_UNUSED_ENTRY(544),
+	ICE_PTT_UNUSED_ENTRY(545),
+	ICE_PTT_UNUSED_ENTRY(546),
+	ICE_PTT_UNUSED_ENTRY(547),
+	ICE_PTT_UNUSED_ENTRY(548),
+	ICE_PTT_UNUSED_ENTRY(549),
+
+	ICE_PTT_UNUSED_ENTRY(550),
+	ICE_PTT_UNUSED_ENTRY(551),
+	ICE_PTT_UNUSED_ENTRY(552),
+	ICE_PTT_UNUSED_ENTRY(553),
+	ICE_PTT_UNUSED_ENTRY(554),
+	ICE_PTT_UNUSED_ENTRY(555),
+	ICE_PTT_UNUSED_ENTRY(556),
+	ICE_PTT_UNUSED_ENTRY(557),
+	ICE_PTT_UNUSED_ENTRY(558),
+	ICE_PTT_UNUSED_ENTRY(559),
+
+	ICE_PTT_UNUSED_ENTRY(560),
+	ICE_PTT_UNUSED_ENTRY(561),
+	ICE_PTT_UNUSED_ENTRY(562),
+	ICE_PTT_UNUSED_ENTRY(563),
+	ICE_PTT_UNUSED_ENTRY(564),
+	ICE_PTT_UNUSED_ENTRY(565),
+	ICE_PTT_UNUSED_ENTRY(566),
+	ICE_PTT_UNUSED_ENTRY(567),
+	ICE_PTT_UNUSED_ENTRY(568),
+	ICE_PTT_UNUSED_ENTRY(569),
+
+	ICE_PTT_UNUSED_ENTRY(570),
+	ICE_PTT_UNUSED_ENTRY(571),
+	ICE_PTT_UNUSED_ENTRY(572),
+	ICE_PTT_UNUSED_ENTRY(573),
+	ICE_PTT_UNUSED_ENTRY(574),
+	ICE_PTT_UNUSED_ENTRY(575),
+	ICE_PTT_UNUSED_ENTRY(576),
+	ICE_PTT_UNUSED_ENTRY(577),
+	ICE_PTT_UNUSED_ENTRY(578),
+	ICE_PTT_UNUSED_ENTRY(579),
+
+	ICE_PTT_UNUSED_ENTRY(580),
+	ICE_PTT_UNUSED_ENTRY(581),
+	ICE_PTT_UNUSED_ENTRY(582),
+	ICE_PTT_UNUSED_ENTRY(583),
+	ICE_PTT_UNUSED_ENTRY(584),
+	ICE_PTT_UNUSED_ENTRY(585),
+	ICE_PTT_UNUSED_ENTRY(586),
+	ICE_PTT_UNUSED_ENTRY(587),
+	ICE_PTT_UNUSED_ENTRY(588),
+	ICE_PTT_UNUSED_ENTRY(589),
+
+	ICE_PTT_UNUSED_ENTRY(590),
+	ICE_PTT_UNUSED_ENTRY(591),
+	ICE_PTT_UNUSED_ENTRY(592),
+	ICE_PTT_UNUSED_ENTRY(593),
+	ICE_PTT_UNUSED_ENTRY(594),
+	ICE_PTT_UNUSED_ENTRY(595),
+	ICE_PTT_UNUSED_ENTRY(596),
+	ICE_PTT_UNUSED_ENTRY(597),
+	ICE_PTT_UNUSED_ENTRY(598),
+	ICE_PTT_UNUSED_ENTRY(599),
+
+	ICE_PTT_UNUSED_ENTRY(600),
+	ICE_PTT_UNUSED_ENTRY(601),
+	ICE_PTT_UNUSED_ENTRY(602),
+	ICE_PTT_UNUSED_ENTRY(603),
+	ICE_PTT_UNUSED_ENTRY(604),
+	ICE_PTT_UNUSED_ENTRY(605),
+	ICE_PTT_UNUSED_ENTRY(606),
+	ICE_PTT_UNUSED_ENTRY(607),
+	ICE_PTT_UNUSED_ENTRY(608),
+	ICE_PTT_UNUSED_ENTRY(609),
+
+	ICE_PTT_UNUSED_ENTRY(610),
+	ICE_PTT_UNUSED_ENTRY(611),
+	ICE_PTT_UNUSED_ENTRY(612),
+	ICE_PTT_UNUSED_ENTRY(613),
+	ICE_PTT_UNUSED_ENTRY(614),
+	ICE_PTT_UNUSED_ENTRY(615),
+	ICE_PTT_UNUSED_ENTRY(616),
+	ICE_PTT_UNUSED_ENTRY(617),
+	ICE_PTT_UNUSED_ENTRY(618),
+	ICE_PTT_UNUSED_ENTRY(619),
+
+	ICE_PTT_UNUSED_ENTRY(620),
+	ICE_PTT_UNUSED_ENTRY(621),
+	ICE_PTT_UNUSED_ENTRY(622),
+	ICE_PTT_UNUSED_ENTRY(623),
+	ICE_PTT_UNUSED_ENTRY(624),
+	ICE_PTT_UNUSED_ENTRY(625),
+	ICE_PTT_UNUSED_ENTRY(626),
+	ICE_PTT_UNUSED_ENTRY(627),
+	ICE_PTT_UNUSED_ENTRY(628),
+	ICE_PTT_UNUSED_ENTRY(629),
+
+	ICE_PTT_UNUSED_ENTRY(630),
+	ICE_PTT_UNUSED_ENTRY(631),
+	ICE_PTT_UNUSED_ENTRY(632),
+	ICE_PTT_UNUSED_ENTRY(633),
+	ICE_PTT_UNUSED_ENTRY(634),
+	ICE_PTT_UNUSED_ENTRY(635),
+	ICE_PTT_UNUSED_ENTRY(636),
+	ICE_PTT_UNUSED_ENTRY(637),
+	ICE_PTT_UNUSED_ENTRY(638),
+	ICE_PTT_UNUSED_ENTRY(639),
+
+	ICE_PTT_UNUSED_ENTRY(640),
+	ICE_PTT_UNUSED_ENTRY(641),
+	ICE_PTT_UNUSED_ENTRY(642),
+	ICE_PTT_UNUSED_ENTRY(643),
+	ICE_PTT_UNUSED_ENTRY(644),
+	ICE_PTT_UNUSED_ENTRY(645),
+	ICE_PTT_UNUSED_ENTRY(646),
+	ICE_PTT_UNUSED_ENTRY(647),
+	ICE_PTT_UNUSED_ENTRY(648),
+	ICE_PTT_UNUSED_ENTRY(649),
+
+	ICE_PTT_UNUSED_ENTRY(650),
+	ICE_PTT_UNUSED_ENTRY(651),
+	ICE_PTT_UNUSED_ENTRY(652),
+	ICE_PTT_UNUSED_ENTRY(653),
+	ICE_PTT_UNUSED_ENTRY(654),
+	ICE_PTT_UNUSED_ENTRY(655),
+	ICE_PTT_UNUSED_ENTRY(656),
+	ICE_PTT_UNUSED_ENTRY(657),
+	ICE_PTT_UNUSED_ENTRY(658),
+	ICE_PTT_UNUSED_ENTRY(659),
+
+	ICE_PTT_UNUSED_ENTRY(660),
+	ICE_PTT_UNUSED_ENTRY(661),
+	ICE_PTT_UNUSED_ENTRY(662),
+	ICE_PTT_UNUSED_ENTRY(663),
+	ICE_PTT_UNUSED_ENTRY(664),
+	ICE_PTT_UNUSED_ENTRY(665),
+	ICE_PTT_UNUSED_ENTRY(666),
+	ICE_PTT_UNUSED_ENTRY(667),
+	ICE_PTT_UNUSED_ENTRY(668),
+	ICE_PTT_UNUSED_ENTRY(669),
+
+	ICE_PTT_UNUSED_ENTRY(670),
+	ICE_PTT_UNUSED_ENTRY(671),
+	ICE_PTT_UNUSED_ENTRY(672),
+	ICE_PTT_UNUSED_ENTRY(673),
+	ICE_PTT_UNUSED_ENTRY(674),
+	ICE_PTT_UNUSED_ENTRY(675),
+	ICE_PTT_UNUSED_ENTRY(676),
+	ICE_PTT_UNUSED_ENTRY(677),
+	ICE_PTT_UNUSED_ENTRY(678),
+	ICE_PTT_UNUSED_ENTRY(679),
+
+	ICE_PTT_UNUSED_ENTRY(680),
+	ICE_PTT_UNUSED_ENTRY(681),
+	ICE_PTT_UNUSED_ENTRY(682),
+	ICE_PTT_UNUSED_ENTRY(683),
+	ICE_PTT_UNUSED_ENTRY(684),
+	ICE_PTT_UNUSED_ENTRY(685),
+	ICE_PTT_UNUSED_ENTRY(686),
+	ICE_PTT_UNUSED_ENTRY(687),
+	ICE_PTT_UNUSED_ENTRY(688),
+	ICE_PTT_UNUSED_ENTRY(689),
+
+	ICE_PTT_UNUSED_ENTRY(690),
+	ICE_PTT_UNUSED_ENTRY(691),
+	ICE_PTT_UNUSED_ENTRY(692),
+	ICE_PTT_UNUSED_ENTRY(693),
+	ICE_PTT_UNUSED_ENTRY(694),
+	ICE_PTT_UNUSED_ENTRY(695),
+	ICE_PTT_UNUSED_ENTRY(696),
+	ICE_PTT_UNUSED_ENTRY(697),
+	ICE_PTT_UNUSED_ENTRY(698),
+	ICE_PTT_UNUSED_ENTRY(699),
+
+	ICE_PTT_UNUSED_ENTRY(700),
+	ICE_PTT_UNUSED_ENTRY(701),
+	ICE_PTT_UNUSED_ENTRY(702),
+	ICE_PTT_UNUSED_ENTRY(703),
+	ICE_PTT_UNUSED_ENTRY(704),
+	ICE_PTT_UNUSED_ENTRY(705),
+	ICE_PTT_UNUSED_ENTRY(706),
+	ICE_PTT_UNUSED_ENTRY(707),
+	ICE_PTT_UNUSED_ENTRY(708),
+	ICE_PTT_UNUSED_ENTRY(709),
+
+	ICE_PTT_UNUSED_ENTRY(710),
+	ICE_PTT_UNUSED_ENTRY(711),
+	ICE_PTT_UNUSED_ENTRY(712),
+	ICE_PTT_UNUSED_ENTRY(713),
+	ICE_PTT_UNUSED_ENTRY(714),
+	ICE_PTT_UNUSED_ENTRY(715),
+	ICE_PTT_UNUSED_ENTRY(716),
+	ICE_PTT_UNUSED_ENTRY(717),
+	ICE_PTT_UNUSED_ENTRY(718),
+	ICE_PTT_UNUSED_ENTRY(719),
+
+	ICE_PTT_UNUSED_ENTRY(720),
+	ICE_PTT_UNUSED_ENTRY(721),
+	ICE_PTT_UNUSED_ENTRY(722),
+	ICE_PTT_UNUSED_ENTRY(723),
+	ICE_PTT_UNUSED_ENTRY(724),
+	ICE_PTT_UNUSED_ENTRY(725),
+	ICE_PTT_UNUSED_ENTRY(726),
+	ICE_PTT_UNUSED_ENTRY(727),
+	ICE_PTT_UNUSED_ENTRY(728),
+	ICE_PTT_UNUSED_ENTRY(729),
+
+	ICE_PTT_UNUSED_ENTRY(730),
+	ICE_PTT_UNUSED_ENTRY(731),
+	ICE_PTT_UNUSED_ENTRY(732),
+	ICE_PTT_UNUSED_ENTRY(733),
+	ICE_PTT_UNUSED_ENTRY(734),
+	ICE_PTT_UNUSED_ENTRY(735),
+	ICE_PTT_UNUSED_ENTRY(736),
+	ICE_PTT_UNUSED_ENTRY(737),
+	ICE_PTT_UNUSED_ENTRY(738),
+	ICE_PTT_UNUSED_ENTRY(739),
+
+	ICE_PTT_UNUSED_ENTRY(740),
+	ICE_PTT_UNUSED_ENTRY(741),
+	ICE_PTT_UNUSED_ENTRY(742),
+	ICE_PTT_UNUSED_ENTRY(743),
+	ICE_PTT_UNUSED_ENTRY(744),
+	ICE_PTT_UNUSED_ENTRY(745),
+	ICE_PTT_UNUSED_ENTRY(746),
+	ICE_PTT_UNUSED_ENTRY(747),
+	ICE_PTT_UNUSED_ENTRY(748),
+	ICE_PTT_UNUSED_ENTRY(749),
+
+	ICE_PTT_UNUSED_ENTRY(750),
+	ICE_PTT_UNUSED_ENTRY(751),
+	ICE_PTT_UNUSED_ENTRY(752),
+	ICE_PTT_UNUSED_ENTRY(753),
+	ICE_PTT_UNUSED_ENTRY(754),
+	ICE_PTT_UNUSED_ENTRY(755),
+	ICE_PTT_UNUSED_ENTRY(756),
+	ICE_PTT_UNUSED_ENTRY(757),
+	ICE_PTT_UNUSED_ENTRY(758),
+	ICE_PTT_UNUSED_ENTRY(759),
+
+	ICE_PTT_UNUSED_ENTRY(760),
+	ICE_PTT_UNUSED_ENTRY(761),
+	ICE_PTT_UNUSED_ENTRY(762),
+	ICE_PTT_UNUSED_ENTRY(763),
+	ICE_PTT_UNUSED_ENTRY(764),
+	ICE_PTT_UNUSED_ENTRY(765),
+	ICE_PTT_UNUSED_ENTRY(766),
+	ICE_PTT_UNUSED_ENTRY(767),
+	ICE_PTT_UNUSED_ENTRY(768),
+	ICE_PTT_UNUSED_ENTRY(769),
+
+	ICE_PTT_UNUSED_ENTRY(770),
+	ICE_PTT_UNUSED_ENTRY(771),
+	ICE_PTT_UNUSED_ENTRY(772),
+	ICE_PTT_UNUSED_ENTRY(773),
+	ICE_PTT_UNUSED_ENTRY(774),
+	ICE_PTT_UNUSED_ENTRY(775),
+	ICE_PTT_UNUSED_ENTRY(776),
+	ICE_PTT_UNUSED_ENTRY(777),
+	ICE_PTT_UNUSED_ENTRY(778),
+	ICE_PTT_UNUSED_ENTRY(779),
+
+	ICE_PTT_UNUSED_ENTRY(780),
+	ICE_PTT_UNUSED_ENTRY(781),
+	ICE_PTT_UNUSED_ENTRY(782),
+	ICE_PTT_UNUSED_ENTRY(783),
+	ICE_PTT_UNUSED_ENTRY(784),
+	ICE_PTT_UNUSED_ENTRY(785),
+	ICE_PTT_UNUSED_ENTRY(786),
+	ICE_PTT_UNUSED_ENTRY(787),
+	ICE_PTT_UNUSED_ENTRY(788),
+	ICE_PTT_UNUSED_ENTRY(789),
+
+	ICE_PTT_UNUSED_ENTRY(790),
+	ICE_PTT_UNUSED_ENTRY(791),
+	ICE_PTT_UNUSED_ENTRY(792),
+	ICE_PTT_UNUSED_ENTRY(793),
+	ICE_PTT_UNUSED_ENTRY(794),
+	ICE_PTT_UNUSED_ENTRY(795),
+	ICE_PTT_UNUSED_ENTRY(796),
+	ICE_PTT_UNUSED_ENTRY(797),
+	ICE_PTT_UNUSED_ENTRY(798),
+	ICE_PTT_UNUSED_ENTRY(799),
+
+	ICE_PTT_UNUSED_ENTRY(800),
+	ICE_PTT_UNUSED_ENTRY(801),
+	ICE_PTT_UNUSED_ENTRY(802),
+	ICE_PTT_UNUSED_ENTRY(803),
+	ICE_PTT_UNUSED_ENTRY(804),
+	ICE_PTT_UNUSED_ENTRY(805),
+	ICE_PTT_UNUSED_ENTRY(806),
+	ICE_PTT_UNUSED_ENTRY(807),
+	ICE_PTT_UNUSED_ENTRY(808),
+	ICE_PTT_UNUSED_ENTRY(809),
+
+	ICE_PTT_UNUSED_ENTRY(810),
+	ICE_PTT_UNUSED_ENTRY(811),
+	ICE_PTT_UNUSED_ENTRY(812),
+	ICE_PTT_UNUSED_ENTRY(813),
+	ICE_PTT_UNUSED_ENTRY(814),
+	ICE_PTT_UNUSED_ENTRY(815),
+	ICE_PTT_UNUSED_ENTRY(816),
+	ICE_PTT_UNUSED_ENTRY(817),
+	ICE_PTT_UNUSED_ENTRY(818),
+	ICE_PTT_UNUSED_ENTRY(819),
+
+	ICE_PTT_UNUSED_ENTRY(820),
+	ICE_PTT_UNUSED_ENTRY(821),
+	ICE_PTT_UNUSED_ENTRY(822),
+	ICE_PTT_UNUSED_ENTRY(823),
+	ICE_PTT_UNUSED_ENTRY(824),
+	ICE_PTT_UNUSED_ENTRY(825),
+	ICE_PTT_UNUSED_ENTRY(826),
+	ICE_PTT_UNUSED_ENTRY(827),
+	ICE_PTT_UNUSED_ENTRY(828),
+	ICE_PTT_UNUSED_ENTRY(829),
+
+	ICE_PTT_UNUSED_ENTRY(830),
+	ICE_PTT_UNUSED_ENTRY(831),
+	ICE_PTT_UNUSED_ENTRY(832),
+	ICE_PTT_UNUSED_ENTRY(833),
+	ICE_PTT_UNUSED_ENTRY(834),
+	ICE_PTT_UNUSED_ENTRY(835),
+	ICE_PTT_UNUSED_ENTRY(836),
+	ICE_PTT_UNUSED_ENTRY(837),
+	ICE_PTT_UNUSED_ENTRY(838),
+	ICE_PTT_UNUSED_ENTRY(839),
+
+	ICE_PTT_UNUSED_ENTRY(840),
+	ICE_PTT_UNUSED_ENTRY(841),
+	ICE_PTT_UNUSED_ENTRY(842),
+	ICE_PTT_UNUSED_ENTRY(843),
+	ICE_PTT_UNUSED_ENTRY(844),
+	ICE_PTT_UNUSED_ENTRY(845),
+	ICE_PTT_UNUSED_ENTRY(846),
+	ICE_PTT_UNUSED_ENTRY(847),
+	ICE_PTT_UNUSED_ENTRY(848),
+	ICE_PTT_UNUSED_ENTRY(849),
+
+	ICE_PTT_UNUSED_ENTRY(850),
+	ICE_PTT_UNUSED_ENTRY(851),
+	ICE_PTT_UNUSED_ENTRY(852),
+	ICE_PTT_UNUSED_ENTRY(853),
+	ICE_PTT_UNUSED_ENTRY(854),
+	ICE_PTT_UNUSED_ENTRY(855),
+	ICE_PTT_UNUSED_ENTRY(856),
+	ICE_PTT_UNUSED_ENTRY(857),
+	ICE_PTT_UNUSED_ENTRY(858),
+	ICE_PTT_UNUSED_ENTRY(859),
+
+	ICE_PTT_UNUSED_ENTRY(860),
+	ICE_PTT_UNUSED_ENTRY(861),
+	ICE_PTT_UNUSED_ENTRY(862),
+	ICE_PTT_UNUSED_ENTRY(863),
+	ICE_PTT_UNUSED_ENTRY(864),
+	ICE_PTT_UNUSED_ENTRY(865),
+	ICE_PTT_UNUSED_ENTRY(866),
+	ICE_PTT_UNUSED_ENTRY(867),
+	ICE_PTT_UNUSED_ENTRY(868),
+	ICE_PTT_UNUSED_ENTRY(869),
+
+	ICE_PTT_UNUSED_ENTRY(870),
+	ICE_PTT_UNUSED_ENTRY(871),
+	ICE_PTT_UNUSED_ENTRY(872),
+	ICE_PTT_UNUSED_ENTRY(873),
+	ICE_PTT_UNUSED_ENTRY(874),
+	ICE_PTT_UNUSED_ENTRY(875),
+	ICE_PTT_UNUSED_ENTRY(876),
+	ICE_PTT_UNUSED_ENTRY(877),
+	ICE_PTT_UNUSED_ENTRY(878),
+	ICE_PTT_UNUSED_ENTRY(879),
+
+	ICE_PTT_UNUSED_ENTRY(880),
+	ICE_PTT_UNUSED_ENTRY(881),
+	ICE_PTT_UNUSED_ENTRY(882),
+	ICE_PTT_UNUSED_ENTRY(883),
+	ICE_PTT_UNUSED_ENTRY(884),
+	ICE_PTT_UNUSED_ENTRY(885),
+	ICE_PTT_UNUSED_ENTRY(886),
+	ICE_PTT_UNUSED_ENTRY(887),
+	ICE_PTT_UNUSED_ENTRY(888),
+	ICE_PTT_UNUSED_ENTRY(889),
+
+	ICE_PTT_UNUSED_ENTRY(890),
+	ICE_PTT_UNUSED_ENTRY(891),
+	ICE_PTT_UNUSED_ENTRY(892),
+	ICE_PTT_UNUSED_ENTRY(893),
+	ICE_PTT_UNUSED_ENTRY(894),
+	ICE_PTT_UNUSED_ENTRY(895),
+	ICE_PTT_UNUSED_ENTRY(896),
+	ICE_PTT_UNUSED_ENTRY(897),
+	ICE_PTT_UNUSED_ENTRY(898),
+	ICE_PTT_UNUSED_ENTRY(899),
+
+	ICE_PTT_UNUSED_ENTRY(900),
+	ICE_PTT_UNUSED_ENTRY(901),
+	ICE_PTT_UNUSED_ENTRY(902),
+	ICE_PTT_UNUSED_ENTRY(903),
+	ICE_PTT_UNUSED_ENTRY(904),
+	ICE_PTT_UNUSED_ENTRY(905),
+	ICE_PTT_UNUSED_ENTRY(906),
+	ICE_PTT_UNUSED_ENTRY(907),
+	ICE_PTT_UNUSED_ENTRY(908),
+	ICE_PTT_UNUSED_ENTRY(909),
+
+	ICE_PTT_UNUSED_ENTRY(910),
+	ICE_PTT_UNUSED_ENTRY(911),
+	ICE_PTT_UNUSED_ENTRY(912),
+	ICE_PTT_UNUSED_ENTRY(913),
+	ICE_PTT_UNUSED_ENTRY(914),
+	ICE_PTT_UNUSED_ENTRY(915),
+	ICE_PTT_UNUSED_ENTRY(916),
+	ICE_PTT_UNUSED_ENTRY(917),
+	ICE_PTT_UNUSED_ENTRY(918),
+	ICE_PTT_UNUSED_ENTRY(919),
+
+	ICE_PTT_UNUSED_ENTRY(920),
+	ICE_PTT_UNUSED_ENTRY(921),
+	ICE_PTT_UNUSED_ENTRY(922),
+	ICE_PTT_UNUSED_ENTRY(923),
+	ICE_PTT_UNUSED_ENTRY(924),
+	ICE_PTT_UNUSED_ENTRY(925),
+	ICE_PTT_UNUSED_ENTRY(926),
+	ICE_PTT_UNUSED_ENTRY(927),
+	ICE_PTT_UNUSED_ENTRY(928),
+	ICE_PTT_UNUSED_ENTRY(929),
+
+	ICE_PTT_UNUSED_ENTRY(930),
+	ICE_PTT_UNUSED_ENTRY(931),
+	ICE_PTT_UNUSED_ENTRY(932),
+	ICE_PTT_UNUSED_ENTRY(933),
+	ICE_PTT_UNUSED_ENTRY(934),
+	ICE_PTT_UNUSED_ENTRY(935),
+	ICE_PTT_UNUSED_ENTRY(936),
+	ICE_PTT_UNUSED_ENTRY(937),
+	ICE_PTT_UNUSED_ENTRY(938),
+	ICE_PTT_UNUSED_ENTRY(939),
+
+	ICE_PTT_UNUSED_ENTRY(940),
+	ICE_PTT_UNUSED_ENTRY(941),
+	ICE_PTT_UNUSED_ENTRY(942),
+	ICE_PTT_UNUSED_ENTRY(943),
+	ICE_PTT_UNUSED_ENTRY(944),
+	ICE_PTT_UNUSED_ENTRY(945),
+	ICE_PTT_UNUSED_ENTRY(946),
+	ICE_PTT_UNUSED_ENTRY(947),
+	ICE_PTT_UNUSED_ENTRY(948),
+	ICE_PTT_UNUSED_ENTRY(949),
+
+	ICE_PTT_UNUSED_ENTRY(950),
+	ICE_PTT_UNUSED_ENTRY(951),
+	ICE_PTT_UNUSED_ENTRY(952),
+	ICE_PTT_UNUSED_ENTRY(953),
+	ICE_PTT_UNUSED_ENTRY(954),
+	ICE_PTT_UNUSED_ENTRY(955),
+	ICE_PTT_UNUSED_ENTRY(956),
+	ICE_PTT_UNUSED_ENTRY(957),
+	ICE_PTT_UNUSED_ENTRY(958),
+	ICE_PTT_UNUSED_ENTRY(959),
+
+	ICE_PTT_UNUSED_ENTRY(960),
+	ICE_PTT_UNUSED_ENTRY(961),
+	ICE_PTT_UNUSED_ENTRY(962),
+	ICE_PTT_UNUSED_ENTRY(963),
+	ICE_PTT_UNUSED_ENTRY(964),
+	ICE_PTT_UNUSED_ENTRY(965),
+	ICE_PTT_UNUSED_ENTRY(966),
+	ICE_PTT_UNUSED_ENTRY(967),
+	ICE_PTT_UNUSED_ENTRY(968),
+	ICE_PTT_UNUSED_ENTRY(969),
+
+	ICE_PTT_UNUSED_ENTRY(970),
+	ICE_PTT_UNUSED_ENTRY(971),
+	ICE_PTT_UNUSED_ENTRY(972),
+	ICE_PTT_UNUSED_ENTRY(973),
+	ICE_PTT_UNUSED_ENTRY(974),
+	ICE_PTT_UNUSED_ENTRY(975),
+	ICE_PTT_UNUSED_ENTRY(976),
+	ICE_PTT_UNUSED_ENTRY(977),
+	ICE_PTT_UNUSED_ENTRY(978),
+	ICE_PTT_UNUSED_ENTRY(979),
+
+	ICE_PTT_UNUSED_ENTRY(980),
+	ICE_PTT_UNUSED_ENTRY(981),
+	ICE_PTT_UNUSED_ENTRY(982),
+	ICE_PTT_UNUSED_ENTRY(983),
+	ICE_PTT_UNUSED_ENTRY(984),
+	ICE_PTT_UNUSED_ENTRY(985),
+	ICE_PTT_UNUSED_ENTRY(986),
+	ICE_PTT_UNUSED_ENTRY(987),
+	ICE_PTT_UNUSED_ENTRY(988),
+	ICE_PTT_UNUSED_ENTRY(989),
+
+	ICE_PTT_UNUSED_ENTRY(990),
+	ICE_PTT_UNUSED_ENTRY(991),
+	ICE_PTT_UNUSED_ENTRY(992),
+	ICE_PTT_UNUSED_ENTRY(993),
+	ICE_PTT_UNUSED_ENTRY(994),
+	ICE_PTT_UNUSED_ENTRY(995),
+	ICE_PTT_UNUSED_ENTRY(996),
+	ICE_PTT_UNUSED_ENTRY(997),
+	ICE_PTT_UNUSED_ENTRY(998),
+	ICE_PTT_UNUSED_ENTRY(999),
+
+	ICE_PTT_UNUSED_ENTRY(1000),
+	ICE_PTT_UNUSED_ENTRY(1001),
+	ICE_PTT_UNUSED_ENTRY(1002),
+	ICE_PTT_UNUSED_ENTRY(1003),
+	ICE_PTT_UNUSED_ENTRY(1004),
+	ICE_PTT_UNUSED_ENTRY(1005),
+	ICE_PTT_UNUSED_ENTRY(1006),
+	ICE_PTT_UNUSED_ENTRY(1007),
+	ICE_PTT_UNUSED_ENTRY(1008),
+	ICE_PTT_UNUSED_ENTRY(1009),
+
+	ICE_PTT_UNUSED_ENTRY(1010),
+	ICE_PTT_UNUSED_ENTRY(1011),
+	ICE_PTT_UNUSED_ENTRY(1012),
+	ICE_PTT_UNUSED_ENTRY(1013),
+	ICE_PTT_UNUSED_ENTRY(1014),
+	ICE_PTT_UNUSED_ENTRY(1015),
+	ICE_PTT_UNUSED_ENTRY(1016),
+	ICE_PTT_UNUSED_ENTRY(1017),
+	ICE_PTT_UNUSED_ENTRY(1018),
+	ICE_PTT_UNUSED_ENTRY(1019),
+
+	ICE_PTT_UNUSED_ENTRY(1020),
+	ICE_PTT_UNUSED_ENTRY(1021),
+	ICE_PTT_UNUSED_ENTRY(1022),
+	ICE_PTT_UNUSED_ENTRY(1023),
+};
+
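+/* Translate the packet type index from the Rx descriptor (up to 1024
+ * entries) into its decoded fields via the lookup table above.
+ */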
+static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
+{
+	return ice_ptype_lkup[ptype];
+}
+
+#define ICE_LINK_SPEED_UNKNOWN		0
+#define ICE_LINK_SPEED_10MBPS		10
+#define ICE_LINK_SPEED_100MBPS		100
+#define ICE_LINK_SPEED_1000MBPS		1000
+#define ICE_LINK_SPEED_2500MBPS		2500
+#define ICE_LINK_SPEED_5000MBPS		5000
+#define ICE_LINK_SPEED_10000MBPS	10000
+#define ICE_LINK_SPEED_20000MBPS	20000
+#define ICE_LINK_SPEED_25000MBPS	25000
+#define ICE_LINK_SPEED_40000MBPS	40000
+#define ICE_LINK_SPEED_50000MBPS	50000
+#define ICE_LINK_SPEED_100000MBPS	100000
+
+#endif /* _ICE_LAN_TX_RX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 14/31] net/ice/base: add OS specific implementation
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (12 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 15/31] net/ice: support device initialization Wenzhuo Lu
                     ` (17 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Add some macro definitions and small functions which
are specific to DPDK.
Add a README too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/base/README      |  22 ++
 drivers/net/ice/base/ice_osdep.h | 524 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 546 insertions(+)
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_osdep.h

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
new file mode 100644
index 0000000..708f607
--- /dev/null
+++ b/drivers/net/ice/base/README
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+Intel® ICE driver
+==================
+
+This directory contains the source code of the FreeBSD ice driver,
+version 2018.12.11, released by the team which develops
+the basic drivers for the ice NICs. The base/ directory contains the
+original source package.
+This driver is valid for the product(s) listed below:
+
+* Intel® Ethernet Network Adapters E810
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    ice_osdep.h
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
new file mode 100644
index 0000000..a3351c0
--- /dev/null
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_OSDEP_H_
+#define _ICE_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../ice_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+typedef int64_t         s64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
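+/* Shift in two 16-bit steps so the macro stays well defined even when 'n'
+ * is a 32-bit type, where a single shift by 32 would be undefined behaviour.
+ */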
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x)   ((x) & 0xFFFF)
+#define high_16_bits(x)  (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((unused))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((unused))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define ice_debug(h, m, s, ...)					\
+do {								\
+	if (((m) & (h)->debug_mask))				\
+		PMD_DRV_LOG_RAW(DEBUG, "ice %02x.%x " s,	\
+			(h)->bus.device, (h)->bus.func,		\
+					##__VA_ARGS__);		\
+} while (0)
+
+#define ice_info(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_warn(hw, fmt, args...) ice_debug(hw, ICE_DBG_ALL, fmt, ##args)
+#define ice_debug_array(hw, type, rowsize, groupsize, buf, len)		\
+do {									\
+	struct ice_hw *hw_l = hw;					\
+	u16 len_l = len;						\
+	u8 *buf_l = buf;						\
+	int i;								\
+	for (i = 0; i < len_l; i += 8)					\
+		ice_debug(hw_l, type,					\
+			  "0x%04X  0x%016"PRIx64"\n",			\
+			  i, *((u64 *)((buf_l) + i)));			\
+} while (0)
+#define ice_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF ice_snprintf
+#endif
+
+#define ICE_PCI_REG(reg)     rte_read32(reg)
+#define ICE_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+static inline uint32_t ice_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
+}
+
+#define ICE_PCI_REG_WRITE(reg, value) \
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define ice_flush(a)   ICE_READ_REG((a), GLGEN_STAT)
+#define icevf_flush(a) ICE_READ_REG((a), VFGEN_RSTAT)
+#define ICE_READ_REG(hw, reg) ice_read_addr(ICE_PCI_REG_ADDR((hw), (reg)))
+#define ICE_WRITE_REG(hw, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) ice_read_addr(ICE_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
+#define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
+#define div64_long(n, d) ((n) / (d))
+
+#define BITS_PER_BYTE       8
+typedef u32 ice_bitmap_t;
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_TO_CHUNKS(nr)   DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define ice_declare_bitmap(name, bits) \
+	ice_bitmap_t name[BITS_TO_CHUNKS(bits)]
+
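+/* Mask of the valid bits in the last chunk of an nr-bit bitmap; e.g. with
+ * 32-bit chunks and nr = 40, this is the low 8 bits of the second chunk.
+ */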
+#define BITS_CHUNK_MASK(nr)	(((ice_bitmap_t)~0) >>			\
+		((BITS_PER_BYTE * sizeof(ice_bitmap_t)) -		\
+		(((nr) - 1) % (BITS_PER_BYTE * sizeof(ice_bitmap_t))	\
+		 + 1)))
+#define BITS_PER_CHUNK          (BITS_PER_BYTE * sizeof(ice_bitmap_t))
+#define BIT_CHUNK(nr)           ((nr) / BITS_PER_CHUNK)
+#define BIT_IN_CHUNK(nr)        BIT((nr) % BITS_PER_CHUNK)
+
+static inline bool ice_is_bit_set(const ice_bitmap_t *bitmap, u16 nr)
+{
+	return !!(bitmap[BIT_CHUNK(nr)] & BIT_IN_CHUNK(nr));
+}
+
+#define ice_and_bitmap(d, b1, b2, sz) \
+	ice_intersect_bitmaps((u8 *)d, (u8 *)b1, (const u8 *)b2, (u16)sz)
+static inline int
+ice_intersect_bitmaps(u8 *dst, const u8 *bmp1, const u8 *bmp2, u16 sz)
+{
+	u32 res = 0;
+	int cnt;
+	u16 i;
+
+	/* Utilize 32-bit operations */
+	cnt = (sz % BITS_PER_BYTE) ?
+		(sz / BITS_PER_BYTE) + 1 : sz / BITS_PER_BYTE;
+	for (i = 0; i < cnt / 4; i++) {
+		((u32 *)dst)[i] = ((const u32 *)bmp1)[i] &
+		((const u32 *)bmp2)[i];
+		res |= ((u32 *)dst)[i];
+	}
+
+	for (i *= 4; i < cnt; i++) {
+		if ((sz % 8 == 0) || (i + 1 < cnt)) {
+			dst[i] = bmp1[i] & bmp2[i];
+		} else {
+			/* Remaining bits that do not occupy the whole byte */
+			u8 mask = ~0u >> (8 - (sz % 8));
+
+			dst[i] = bmp1[i] & bmp2[i] & mask;
+		}
+
+		res |= dst[i];
+	}
+
+	return res != 0;
+}
+
+/* Find the index of the first set bit within the first 'size' bits;
+ * returns 'size' if no bit is set.
+ */
+static inline int ice_find_first_bit(ice_bitmap_t *name, u16 size)
+{
+	u16 i;
+
+	for (i = 0; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
+/* Find the index of the next set bit at or after 'bits'; returns 'size'
+ * (not 'bits') when no further bit is set, so that for_each_set_bit()
+ * below terminates instead of visiting unset bits.
+ */
+static inline int ice_find_next_bit(ice_bitmap_t *name, u16 size, u16 bits)
+{
+	u16 i;
+
+	for (i = bits; i < size; i++)
+		if (ice_is_bit_set(name, i))
+			return i;
+	return size;
+}
+
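+/* Iterate 'bit' over every set bit in the first 'size' bits of 'addr'. */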
+#define for_each_set_bit(bit, addr, size)				\
+	for ((bit) = ice_find_first_bit((addr), (size));		\
+	(bit) < (size);							\
+	(bit) = ice_find_next_bit((addr), (size), (bit) + 1))
+
+static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u32 bits)
+{
+	u32 max_index = BITS_TO_CHUNKS(bits);
+	u32 i;
+
+	for (i = 0; i < max_index; i++) {
+		if (bitmap[i])
+			return true;
+	}
+	return false;
+}
+
+/* memory allocation tracking */
+struct ice_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct ice_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+#define ice_malloc(h, s)    rte_zmalloc(NULL, s, 0)
+#define ice_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define ice_free(h, m)         rte_free(m)
+
+#define ice_memset(a, b, c, d) memset((a), (b), (c))
+#define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
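+/* Note: ice_memdup() does not check the ice_malloc() result; the copy
+ * will fault if the allocation fails.
+ */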
+#define ice_memdup(a, b, c, d) rte_memcpy(ice_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+static inline void
+ice_set_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	__sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void
+ice_clear_bit(unsigned int nr, volatile ice_bitmap_t *addr)
+{
+	/* AND with the inverted bit; (0UL << nr) would clear every bit */
+	__sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline void
+ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		bmp[i] = 0;
+	mask = BITS_CHUNK_MASK(size);
+	bmp[i] &= ~mask;
+}
+
+static inline void
+ice_or_bitmap(ice_bitmap_t *dst, const ice_bitmap_t *bmp1,
+	      const ice_bitmap_t *bmp2, u16 size)
+{
+	unsigned long mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = bmp1[i] | bmp2[i];
+
+	/* We want to only OR bits within the size. Furthermore, we also do
+	 * not want to modify destination bits which are beyond the specified
+	 * size. Use a bitmask to ensure that we only modify the bits that are
+	 * within the specified size.
+	 */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= (bmp1[i] | bmp2[i]) & mask;
+}
+
+static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		dst[i] = src[i];
+
+	/* We want to only copy bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	dst[i] &= ~mask;
+	dst[i] |= src[i] & mask;
+}
+
+static inline bool
+ice_cmp_bitmap(ice_bitmap_t *bmp1, ice_bitmap_t *bmp2, u16 size)
+{
+	ice_bitmap_t mask;
+	u16 i;
+
+	/* Handle all but the last chunk */
+	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
+		if (bmp1[i] != bmp2[i])
+			return false;
+
+	/* We want to only compare bits within the size. */
+	mask = BITS_CHUNK_MASK(size);
+	if ((bmp1[i] & mask) != (bmp2[i] & mask))
+		return false;
+
+	return true;
+}
+
+/* SW spinlock */
+struct ice_lock {
+	rte_spinlock_t spinlock;
+};
+
+static inline void
+ice_init_lock(struct ice_lock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+ice_acquire_lock(struct ice_lock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+ice_release_lock(struct ice_lock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+ice_destroy_lock(__attribute__((unused)) struct ice_lock *sp)
+{
+}
+
+struct ice_hw;
+
+static inline void *
+ice_alloc_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		  struct ice_dma_mem *mem, u64 size)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return NULL;
+
+	snprintf(z_name, sizeof(z_name), "ice_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 0, RTE_PGSIZE_2M);
+	if (!mz)
+		return NULL;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+		    "%"PRIu64, mz->name, mem->pa);
+
+	return mem->va;
+}
+
+static inline void
+ice_free_dma_mem(__attribute__((unused)) struct ice_hw *hw,
+		 struct ice_dma_mem *mem)
+{
+	PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+		    "%"PRIu64, ((const struct rte_memzone *)mem->zone)->name,
+		    mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+}
+
+static inline u8
+ice_hweight8(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 8; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define ice_usec_delay(x) rte_delay_us(x)
+#define ice_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+struct ice_list_entry {
+	LIST_ENTRY(ice_list_entry) next;
+};
+
+LIST_HEAD(ice_list_head, ice_list_entry);
+
+#define LIST_ENTRY_TYPE    ice_list_entry
+#define LIST_HEAD_TYPE     ice_list_head
+#define INIT_LIST_HEAD(list_head)  LIST_INIT(list_head)
+#define LIST_DEL(entry)            LIST_REMOVE(entry, next)
+/* LIST_EMPTY(list_head) is the same as in sys/queue.h */
+
+/* Note: the parameter order is swapped relative to the Linux kernel macros */
+#define LIST_FIRST_ENTRY(head, type, field) (type *)((head)->lh_first)
+#define LIST_ADD(entry, list_head)    LIST_INSERT_HEAD(list_head, entry, next)
+#define LIST_ADD_AFTER(entry, list_entry) \
+	LIST_INSERT_AFTER(list_entry, entry, next)
+#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
+	for ((pos) = (head)->lh_first ?					       \
+		     container_of((head)->lh_first, struct type, member) :     \
+		     0;							       \
+	     (pos);							       \
+	     (pos) = (pos)->member.next.le_next ?			       \
+		     container_of((pos)->member.next.le_next, struct type,     \
+				  member) :				       \
+		     0)
+
+#define LIST_REPLACE_INIT(list_head, head) do {				\
+	(head)->lh_first = (list_head)->lh_first;			\
+	INIT_LIST_HEAD(list_head);					\
+} while (0)
+
+#define HLIST_NODE_TYPE         LIST_ENTRY_TYPE
+#define HLIST_HEAD_TYPE         LIST_HEAD_TYPE
+#define INIT_HLIST_HEAD(list_head)             INIT_LIST_HEAD(list_head)
+#define HLIST_ADD_HEAD(entry, list_head)       LIST_ADD(entry, list_head)
+#define HLIST_EMPTY(list_head)                 LIST_EMPTY(list_head)
+#define HLIST_DEL(entry)                       LIST_DEL(entry)
+#define HLIST_FOR_EACH_ENTRY(pos, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
+	LIST_FOR_EACH_ENTRY(pos, head, type, member)
+
+#ifndef ICE_DBG_TRACE
+#define ICE_DBG_TRACE		BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef ICE_INTEL_VENDOR_ID
+#define ICE_INTEL_VENDOR_ID		0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+	((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether all 6 bytes of the address are 0xff (broadcast). */
+#define IS_BROADCAST_ETHER_ADDR(addr)	\
+	((bool)((((u16 *)(addr))[0] == ((u16)0xffff)) && \
+		(((u16 *)(addr))[1] == ((u16)0xffff)) && \
+		(((u16 *)(addr))[2] == ((u16)0xffff))))
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+	(((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+	 ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#endif /* _ICE_OSDEP_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 15/31] net/ice: support device initialization
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (13 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 16/31] net/ice: support device and queue ops Wenzhuo Lu
                     ` (16 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Update the documents too.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                             |   2 +
 config/common_base                      |   7 +
 doc/guides/nics/features/ice.ini        |  11 +
 doc/guides/nics/ice.rst                 |  80 ++++
 doc/guides/nics/index.rst               |   1 +
 doc/guides/rel_notes/release_19_02.rst  |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/ice/Makefile                |  54 +++
 drivers/net/ice/base/meson.build        |  27 ++
 drivers/net/ice/ice_ethdev.c            | 638 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h            | 305 +++++++++++++++
 drivers/net/ice/ice_logs.h              |  45 +++
 drivers/net/ice/meson.build             |  12 +
 drivers/net/ice/rte_pmd_ice_version.map |   4 +
 drivers/net/meson.build                 |   1 +
 mk/rte.app.mk                           |   1 +
 16 files changed, 1194 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 37f3bf7..cdb18e0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -598,6 +598,8 @@ M: Qiming Yang <qiming.yang@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
 F: drivers/net/ice/
+F: doc/guides/nics/ice.rst
+F: doc/guides/nics/features/ice.ini
 
 Marvell mvpp2
 M: Tomasz Duszynski <tdu@semihalf.com>
diff --git a/config/common_base b/config/common_base
index d12ae98..872f440 100644
--- a/config/common_base
+++ b/config/common_base
@@ -297,6 +297,13 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented ICE PMD driver
+#
+CONFIG_RTE_LIBRTE_ICE_PMD=y
+CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
new file mode 100644
index 0000000..085e848
--- /dev/null
+++ b/doc/guides/nics/features/ice.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'ice' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
new file mode 100644
index 0000000..946ed04
--- /dev/null
+++ b/doc/guides/nics/ice.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+ICE Poll Mode Driver
+======================
+
+The ice PMD (librte_pmd_ice) provides poll mode driver support for
+10/25 Gbps Intel® Ethernet 810 Series Network Adapters based on
+the Intel Ethernet Controller E810.
+
+
+Prerequisites
+-------------
+
+- Identify your adapter using `Intel Support
+  <http://www.intel.com/support>`_ and get the latest NVM/FW images.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
+  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_ice`` driver.
+
+- ``CONFIG_RTE_LIBRTE_ICE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum Number of Queue Pairs``
+
+  The maximum number of queue pairs is decided by the hardware. If it is not
+  configured, the application uses the number reported by the hardware. Users
+  can check the number by calling the API ``rte_eth_dev_info_get``.
+  To limit the number of queue pairs, set a smaller number with the device
+  argument ``max_queue_pair_num=n``.
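+
+  For example, to limit a port to 8 queue pairs (the PCI address below is
+  only a placeholder)::
+
+      -w 0000:18:00.0,max_queue_pair_num=8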
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Limitations or Known issues
+---------------------------
+
+19.02 limitation
+~~~~~~~~~~~~~~~~
+
+The ice PMD code released in 19.02 is for evaluation only.
+
+
+Promiscuous mode not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+As promiscuous mode is not supported at this stage, a port can only receive
+packets whose destination MAC address is the port's own.
+
+
+TX anti-spoofing cannot be disabled
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+TX anti-spoofing is enabled by default and cannot be disabled at this stage.
+Any TX packet whose source MAC address is not the port's own will be dropped
+by the hardware, which means io forwarding mode is not supported for now.
+MAC forwarding mode is recommended for evaluation.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1e46705..a205f15 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -26,6 +26,7 @@ Network Interface Controller Drivers
     enic
     fm10k
     i40e
+    ice
     ifc
     igb
     ixgbe
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86..ca560b1 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added ICE net PMD**
+
+  Added the new ``ice`` net driver for Intel® Ethernet Network Adapters E810.
+  See the :doc:`../nics/ice` NIC guide for more details on this new driver.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index c0386fe..670d7f7 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -30,6 +30,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
+DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
new file mode 100644
index 0000000..70f23e3
--- /dev/null
+++ b/drivers/net/ice/Makefile
@@ -0,0 +1,54 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ice.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_ice_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER +=
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_controlq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_sched.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build
new file mode 100644
index 0000000..0cfc8cd
--- /dev/null
+++ b/drivers/net/ice/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+sources = [
+	'ice_controlq.c',
+	'ice_common.c',
+	'ice_sched.c',
+	'ice_switch.c',
+	'ice_nvm.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+		'-Wno-unused-but-set-variable',
+		'-Wno-unused-variable',
+]
+c_args = cflags
+
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ice_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
new file mode 100644
index 0000000..a514755
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.c
@@ -0,0 +1,638 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_pci.h>
+
+#include "base/ice_sched.h"
+#include "ice_ethdev.h"
+
+#define ICE_MAX_QP_NUM "max_queue_pair_num"
+#define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
+
+int ice_logtype_init;
+int ice_logtype_driver;
+
+static const struct rte_pci_id pci_id_ice_map[] = {
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops ice_eth_dev_ops = {
+	.dev_configure                = NULL,
+};
+
+static void
+ice_init_controlq_parameter(struct ice_hw *hw)
+{
+	/* fields for adminq */
+	hw->adminq.num_rq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.num_sq_entries = ICE_ADMINQ_LEN;
+	hw->adminq.rq_buf_size = ICE_ADMINQ_BUF_SZ;
+	hw->adminq.sq_buf_size = ICE_ADMINQ_BUF_SZ;
+
+	/* fields for mailboxq, DPDK used as PF host */
+	hw->mailboxq.num_rq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.num_sq_entries = ICE_MAILBOXQ_LEN;
+	hw->mailboxq.rq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+	hw->mailboxq.sq_buf_size = ICE_MAILBOXQ_BUF_SZ;
+}
+
+static int
+ice_check_qp_num(const char *key, const char *qp_value,
+		 void *opaque)
+{
+	int *num_out = opaque;
+	char *end = NULL;
+	int num = 0;
+
+	while (isblank(*qp_value))
+		qp_value++;
+
+	errno = 0;
+	num = strtoul(qp_value, &end, 10);
+
+	/* Reject zero, trailing garbage and out-of-range values */
+	if (num <= 0 || *end != '\0' || errno) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+			    "value must be > 0",
+			    qp_value, key);
+		return -1;
+	}
+
+	*num_out = num;
+	return 0;
+}
+
+static int
+ice_config_max_queue_pair_num(struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+	const char *queue_num_key = ICE_MAX_QP_NUM;
+	int qp_num = 0;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, queue_num_key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	/* The parsed value is passed back through the opaque pointer;
+	 * rte_kvargs_process() itself only reports success or failure.
+	 */
+	if (rte_kvargs_process(kvlist, queue_num_key,
+			       ice_check_qp_num, &qp_num) < 0)
+		qp_num = 0;
+	rte_kvargs_free(kvlist);
+
+	return qp_num;
+}
+
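+/* Simple range allocator for HW resources (e.g. MSI-X vectors): free and
+ * allocated ranges are tracked as [base, base + len) entries on two lists.
+ */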
+static int
+ice_res_pool_init(struct ice_res_pool_info *pool, uint32_t base,
+		  uint32_t num)
+{
+	struct pool_entry *entry;
+
+	if (!pool || !num)
+		return -EINVAL;
+
+	entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+	if (!entry) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory for resource pool");
+		return -ENOMEM;
+	}
+
+	/* queue heap initialize */
+	pool->num_free = num;
+	pool->num_alloc = 0;
+	pool->base = base;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+
+	/* Initialize element  */
+	entry->base = 0;
+	entry->len = num;
+
+	LIST_INSERT_HEAD(&pool->free_list, entry, next);
+	return 0;
+}
+
+static int
+ice_res_pool_alloc(struct ice_res_pool_info *pool,
+		   uint16_t num)
+{
+	struct pool_entry *entry, *valid_entry;
+
+	if (!pool || !num) {
+		PMD_INIT_LOG(ERR, "Invalid parameter");
+		return -EINVAL;
+	}
+
+	if (pool->num_free < num) {
+		PMD_INIT_LOG(ERR, "No resource. ask:%u, available:%u",
+			     num, pool->num_free);
+		return -ENOMEM;
+	}
+
+	valid_entry = NULL;
+	/* Look up the free list and find the best-fit entry */
+	LIST_FOREACH(entry, &pool->free_list, next) {
+		if (entry->len >= num) {
+			/* Find best one */
+			if (entry->len == num) {
+				valid_entry = entry;
+				break;
+			}
+			if (!valid_entry ||
+			    valid_entry->len > entry->len)
+				valid_entry = entry;
+		}
+	}
+
+	/* No entry found to satisfy the request, return */
+	if (!valid_entry) {
+		PMD_INIT_LOG(ERR, "No valid entry found");
+		return -ENOMEM;
+	}
+	/**
+	 * The entry has exactly as many queues as requested,
+	 * remove it from the free list.
+	 */
+	if (valid_entry->len == num) {
+		LIST_REMOVE(valid_entry, next);
+	} else {
+		/**
+		 * The entry has more queues than requested,
+		 * create a new entry for the alloc list and shrink
+		 * the base and length of the entry left on the free list.
+		 */
+		entry = rte_zmalloc(NULL, sizeof(*entry), 0);
+		if (!entry) {
+			PMD_INIT_LOG(ERR,
+				     "Failed to allocate memory for "
+				     "resource pool");
+			return -ENOMEM;
+		}
+		entry->base = valid_entry->base;
+		entry->len = num;
+		valid_entry->base += num;
+		valid_entry->len -= num;
+		valid_entry = entry;
+	}
+
+	/* Insert it into alloc list, not sorted */
+	LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next);
+
+	pool->num_free -= valid_entry->len;
+	pool->num_alloc += valid_entry->len;
+
+	return valid_entry->base + pool->base;
+}
+
+static void
+ice_res_pool_destroy(struct ice_res_pool_info *pool)
+{
+	struct pool_entry *entry, *next_entry;
+
+	if (!pool)
+		return;
+
+	for (entry = LIST_FIRST(&pool->alloc_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	for (entry = LIST_FIRST(&pool->free_list);
+	     entry && (next_entry = LIST_NEXT(entry, next), 1);
+	     entry = next_entry) {
+		LIST_REMOVE(entry, next);
+		rte_free(entry);
+	}
+
+	pool->num_free = 0;
+	pool->num_alloc = 0;
+	pool->base = 0;
+	LIST_INIT(&pool->alloc_list);
+	LIST_INIT(&pool->free_list);
+}
+
+static void
+ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info)
+{
+	/* Set VSI LUT selection */
+	info->q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI &
+			  ICE_AQ_VSI_Q_OPT_RSS_LUT_M;
+	/* Set Hash scheme */
+	info->q_opt_rss |= ICE_AQ_VSI_Q_OPT_RSS_TPLZ &
+			   ICE_AQ_VSI_Q_OPT_RSS_HASH_M;
+	/* enable TC */
+	info->q_opt_tc = ICE_AQ_VSI_Q_OPT_TC_OVR_M;
+}
+
+static enum ice_status
+ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
+				struct ice_aqc_vsi_props *info,
+				uint8_t enabled_tcmap)
+{
+	uint16_t bsf, qp_idx;
+
+	/* Default is TC0 for now; multi-TC support needs to be added later.
+	 * Configure the TC and queue mapping parameters; for each enabled TC,
+	 * allocate qpnum_per_tc queues to this traffic class.
+	 */
+	if (enabled_tcmap != 0x01) {
+		PMD_INIT_LOG(ERR, "only TC0 is supported");
+		return -ENOTSUP;
+	}
+
+	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
+	bsf = rte_bsf32(vsi->nb_qps);
+	/* Adjust the queue number to actual queues that can be applied */
+	vsi->nb_qps = 0x1 << bsf;
+
+	qp_idx = 0;
+	/* Set tc and queue mapping with VSI */
+	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
+						ICE_AQ_VSI_TC_Q_OFFSET_S) |
+					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+
+	/* Associate queue number with VSI */
+	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
+	info->q_mapping[0] = rte_cpu_to_le_16(vsi->base_queue);
+	info->q_mapping[1] = rte_cpu_to_le_16(vsi->nb_qps);
+	info->valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+	/* Set the info.ingress_table and info.egress_table
+	 * for UP translate table. Now just set it to 1:1 map by default
+	 * -- 0b 111 110 101 100 011 010 001 000 == 0xFAC688
+	 */
+#define ICE_TC_QUEUE_TABLE_DFLT 0x00FAC688
+	info->ingress_table  = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->egress_table   = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	info->outer_up_table = rte_cpu_to_le_32(ICE_TC_QUEUE_TABLE_DFLT);
+	return 0;
+}
+
+static int
+ice_init_mac_address(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (!is_unicast_ether_addr
+		((struct ether_addr *)hw->port_info[0].mac.lan_addr)) {
+		PMD_INIT_LOG(ERR, "Invalid MAC address");
+		return -EINVAL;
+	}
+
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.lan_addr,
+			(struct ether_addr *)hw->port_info[0].mac.perm_addr);
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, sizeof(struct ether_addr), 0);
+	if (!dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory to store mac address");
+		return -ENOMEM;
+	}
+	/* store it to dev data */
+	ether_addr_copy((struct ether_addr *)hw->port_info[0].mac.perm_addr,
+			&dev->data->mac_addrs[0]);
+	return 0;
+}
+
+/* Initialize SW parameters of PF */
+static int
+ice_pf_sw_init(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int max_qp_num;
+
+	/* Parse the devargs once and reuse the result */
+	max_qp_num = ice_config_max_queue_pair_num(dev->device->devargs);
+	if (max_qp_num > 0)
+		pf->lan_nb_qp_max = (uint16_t)max_qp_num;
+	else
+		pf->lan_nb_qp_max =
+			(uint16_t)RTE_MIN(hw->func_caps.common_cap.num_txq,
+					  hw->func_caps.common_cap.num_rxq);
+
+	pf->lan_nb_qps = pf->lan_nb_qp_max;
+
+	return 0;
+}
+
+static struct ice_vsi *
+ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = NULL;
+	struct ice_vsi_ctx vsi_ctx;
+	int ret;
+	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+	uint8_t tc_bitmap = 0x1;
+
+	/* hw->num_lports = 1 in NIC mode */
+	vsi = rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!vsi)
+		return NULL;
+
+	vsi->idx = pf->next_vsi_idx;
+	pf->next_vsi_idx++;
+	vsi->type = type;
+	vsi->adapter = ICE_PF_TO_ADAPTER(pf);
+	vsi->max_macaddrs = ICE_NUM_MACADDR_MAX;
+	vsi->vlan_anti_spoof_on = 0;
+	vsi->vlan_filter_on = 1;
+	TAILQ_INIT(&vsi->mac_list);
+	TAILQ_INIT(&vsi->vlan_list);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+	/* base_queue is used in the queue mapping of the VSI add/update
+	 * command. Assume vsi->base_queue is 0 for now; SRIOV and VMDQ
+	 * cases are not considered at this first stage. Only the main VSI.
+	 */
+	vsi->base_queue = 0;
+	switch (type) {
+	case ICE_VSI_PF:
+		vsi->nb_qps = pf->lan_nb_qps;
+		ice_vsi_config_default_rss(&vsi_ctx.info);
+		vsi_ctx.alloc_from_pool = true;
+		vsi_ctx.flags = ICE_AQ_VSI_TYPE_PF;
+		/* switch_id is queried by get_switch_config aq, which is done
+		 * by ice_init_hw
+		 */
+		vsi_ctx.info.sw_id = hw->port_info->sw_id;
+		vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+		/* Allow all untagged or tagged packets */
+		vsi_ctx.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+		vsi_ctx.info.vlan_flags |= ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+		vsi_ctx.info.q_opt_rss = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF |
+					 ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
+		/* Enable VLAN/UP trip */
+		ret = ice_vsi_config_tc_queue_mapping(vsi,
+						      &vsi_ctx.info,
+						      ICE_DEFAULT_TCMAP);
+		if (ret) {
+			PMD_INIT_LOG(ERR,
+				     "tc queue mapping with vsi failed, "
+				     "err = %d",
+				     ret);
+			goto fail_mem;
+		}
+
+		break;
+	default:
+		/* for other types of VSI */
+		PMD_INIT_LOG(ERR, "other types of VSI not supported");
+		goto fail_mem;
+	}
+
+	/* VF has MSIX interrupt in VF range, don't allocate here */
+	if (type == ICE_VSI_PF) {
+		ret = ice_res_pool_alloc(&pf->msix_pool,
+					 RTE_MIN(vsi->nb_qps,
+						 RTE_MAX_RXTX_INTR_VEC_ID));
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR, "VSI MAIN %d get heap failed %d",
+				     vsi->vsi_id, ret);
+		}
+		vsi->msix_intr = ret;
+		vsi->nb_msix = RTE_MIN(vsi->nb_qps, RTE_MAX_RXTX_INTR_VEC_ID);
+	} else {
+		vsi->msix_intr = 0;
+		vsi->nb_msix = 0;
+	}
+	ret = ice_add_vsi(hw, vsi->idx, &vsi_ctx, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "add vsi failed, err = %d", ret);
+		goto fail_mem;
+	}
+	/* store the VSI information in the SW structure */
+	vsi->vsi_id = vsi_ctx.vsi_num;
+	vsi->info = vsi_ctx.info;
+	pf->vsis_allocated = vsi_ctx.vsis_allocd;
+	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
+
+	/* Only TC0 at the beginning. What we need here is the maximum
+	 * number of TX queues; currently vsi->nb_qps holds it.
+	 * Correct this if that ever changes.
+	 */
+	max_txqs[0] = vsi->nb_qps;
+	ret = ice_cfg_vsi_lan(hw->port_info, vsi->idx,
+			      tc_bitmap, max_txqs);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to config vsi sched");
+
+	return vsi;
+fail_mem:
+	rte_free(vsi);
+	pf->next_vsi_idx--;
+	return NULL;
+}
+
+static int
+ice_pf_setup(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+
+	/* Clear all stats counters */
+	pf->offset_loaded = FALSE;
+	memset(&pf->stats, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->stats_offset, 0, sizeof(struct ice_hw_port_stats));
+	memset(&pf->internal_stats, 0, sizeof(struct ice_eth_stats));
+	memset(&pf->internal_stats_offset, 0, sizeof(struct ice_eth_stats));
+
+	vsi = ice_setup_vsi(pf, ICE_VSI_PF);
+	if (!vsi) {
+		PMD_INIT_LOG(ERR, "Failed to add vsi for PF");
+		return -EINVAL;
+	}
+
+	pf->main_vsi = vsi;
+
+	return 0;
+}
+
+static int
+ice_dev_init(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	int ret;
+
+	dev->dev_ops = &ice_eth_dev_ops;
+
+	pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	pf->adapter->eth_dev = dev;
+	pf->dev_data = dev->data;
+	hw->back = pf->adapter;
+	hw->hw_addr = (uint8_t *)pci_dev->mem_resource[0].addr;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+
+	ice_init_controlq_parameter(hw);
+
+	ret = ice_init_hw(hw);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize HW");
+		return -EINVAL;
+	}
+
+	PMD_INIT_LOG(INFO, "FW %d.%d.%05d API %d.%d",
+		     hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		     hw->api_maj_ver, hw->api_min_ver);
+
+	ice_pf_sw_init(dev);
+	ret = ice_init_mac_address(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to initialize mac address");
+		goto err_init_mac;
+	}
+
+	ret = ice_res_pool_init(&pf->msix_pool, 1,
+				hw->func_caps.common_cap.num_msix_vectors - 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to init MSIX pool");
+		goto err_msix_pool_init;
+	}
+
+	ret = ice_pf_setup(pf);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to setup PF");
+		goto err_pf_setup;
+	}
+
+	return 0;
+
+err_pf_setup:
+	ice_res_pool_destroy(&pf->msix_pool);
+err_msix_pool_init:
+	rte_free(dev->data->mac_addrs);
+err_init_mac:
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+	ice_shutdown_all_ctrlq(hw);
+
+	return ret;
+}
+
+static int
+ice_release_vsi(struct ice_vsi *vsi)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx vsi_ctx;
+	enum ice_status ret;
+
+	if (!vsi)
+		return 0;
+
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
+
+	vsi_ctx.vsi_num = vsi->vsi_id;
+	vsi_ctx.info = vsi->info;
+	ret = ice_free_vsi(hw, vsi->idx, &vsi_ctx, false, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_INIT_LOG(ERR, "Failed to free vsi by aq, %u", vsi->vsi_id);
+		rte_free(vsi);
+		return -1;
+	}
+
+	rte_free(vsi);
+	return 0;
+}
+
+static void
+ice_dev_close(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	ice_res_pool_destroy(&pf->msix_pool);
+	ice_release_vsi(pf->main_vsi);
+
+	ice_shutdown_all_ctrlq(hw);
+}
+
+static int
+ice_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* ice_dev_close() releases the main VSI and shuts down the control
+	 * queues; don't release or shut them down a second time here.
+	 */
+	ice_dev_close(dev);
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	ice_sched_cleanup_all(hw);
+	rte_free(hw->port_info);
+
+	return 0;
+}
+
+static int
+ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	      struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+					     sizeof(struct ice_adapter),
+					     ice_dev_init);
+}
+
+static int
+ice_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, ice_dev_uninit);
+}
+
+static struct rte_pci_driver rte_ice_pmd = {
+	.id_table = pci_id_ice_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = ice_pci_probe,
+	.remove = ice_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_ice, rte_ice_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ice, pci_id_ice_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_ice, "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_ice,
+			      ICE_MAX_QP_NUM "=<int>");
+
+RTE_INIT(ice_init_log)
+{
+	ice_logtype_init = rte_log_register("pmd.net.ice.init");
+	if (ice_logtype_init >= 0)
+		rte_log_set_level(ice_logtype_init, RTE_LOG_NOTICE);
+	ice_logtype_driver = rte_log_register("pmd.net.ice.driver");
+	if (ice_logtype_driver >= 0)
+		rte_log_set_level(ice_logtype_driver, RTE_LOG_NOTICE);
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
new file mode 100644
index 0000000..94e45c8
--- /dev/null
+++ b/drivers/net/ice/ice_ethdev.h
@@ -0,0 +1,305 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_ETHDEV_H_
+#define _ICE_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#include "base/ice_common.h"
+#include "base/ice_adminq_cmd.h"
+
+#define ICE_VLAN_TAG_SIZE        4
+
+#define ICE_ADMINQ_LEN               32
+#define ICE_SBIOQ_LEN                32
+#define ICE_MAILBOXQ_LEN             32
+#define ICE_ADMINQ_BUF_SZ            4096
+#define ICE_SBIOQ_BUF_SZ             4096
+#define ICE_MAILBOXQ_BUF_SZ          4096
+/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
+#define ICE_MAX_Q_PER_TC         64
+#define ICE_NUM_DESC_DEFAULT     512
+#define ICE_BUF_SIZE_MIN         1024
+#define ICE_FRAME_SIZE_MAX       9728
+#define ICE_QUEUE_BASE_ADDR_UNIT 128
+/* number of VSIs and queue default setting */
+#define ICE_MAX_QP_NUM_PER_VF    16
+#define ICE_DEFAULT_QP_NUM_FDIR  1
+#define ICE_UINT32_BIT_SIZE      (CHAR_BIT * sizeof(uint32_t))
+#define ICE_VFTA_SIZE            (4096 / ICE_UINT32_BIT_SIZE)
+/* Maximum number of MAC addresses */
+#define ICE_NUM_MACADDR_MAX       64
+/* Maximum number of VFs */
+#define ICE_MAX_VF               128
+#define ICE_MAX_INTR_QUEUE_NUM   256
+
+#define ICE_MISC_VEC_ID          RTE_INTR_VEC_ZERO_OFFSET
+#define ICE_RX_VEC_ID            RTE_INTR_VEC_RXTX_OFFSET
+
+#define ICE_MAX_PKT_TYPE  1024
+
+/**
+ * vlan_id is a 12-bit number.
+ * The VFTA array is effectively a 4096-bit array of 128 32-bit elements.
+ * 2^5 = 32, so the lower 5 bits select the bit within a 32-bit element,
+ * and the upper 7 bits select the VFTA array index.
+ */
+#define ICE_VFTA_BIT(vlan_id)    (1 << ((vlan_id) & 0x1F))
+#define ICE_VFTA_IDX(vlan_id)    ((vlan_id) >> 5)
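+/* Worked example (illustrative): for vlan_id = 0x123 (291),
+ * ICE_VFTA_IDX(0x123) = 0x123 >> 5 = 9 and
+ * ICE_VFTA_BIT(0x123) = 1 << (0x123 & 0x1F) = 1 << 3,
+ * i.e. VLAN 291 maps to bit 3 of element 9 of the VFTA array.
+ */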
+
+/* Default TC traffic in case DCB is not enabled */
+#define ICE_DEFAULT_TCMAP        0x1
+#define ICE_FDIR_QUEUE_ID        0
+
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define ICE_VMDQ_POOL_BASE       1
+
+#define ICE_DEFAULT_RX_FREE_THRESH  32
+#define ICE_DEFAULT_RX_PTHRESH      8
+#define ICE_DEFAULT_RX_HTHRESH      8
+#define ICE_DEFAULT_RX_WTHRESH      0
+
+#define ICE_DEFAULT_TX_FREE_THRESH  32
+#define ICE_DEFAULT_TX_PTHRESH      32
+#define ICE_DEFAULT_TX_HTHRESH      0
+#define ICE_DEFAULT_TX_WTHRESH      0
+#define ICE_DEFAULT_TX_RSBIT_THRESH 32
+
+/* Bit shift and mask */
+#define ICE_4_BIT_WIDTH  (CHAR_BIT / 2)
+#define ICE_4_BIT_MASK   RTE_LEN2MASK(ICE_4_BIT_WIDTH, uint8_t)
+#define ICE_8_BIT_WIDTH  CHAR_BIT
+#define ICE_8_BIT_MASK   UINT8_MAX
+#define ICE_16_BIT_WIDTH (CHAR_BIT * 2)
+#define ICE_16_BIT_MASK  UINT16_MAX
+#define ICE_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_32_BIT_MASK  UINT32_MAX
+#define ICE_40_BIT_WIDTH (CHAR_BIT * 5)
+#define ICE_40_BIT_MASK  RTE_LEN2MASK(ICE_40_BIT_WIDTH, uint64_t)
+#define ICE_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_48_BIT_MASK  RTE_LEN2MASK(ICE_48_BIT_WIDTH, uint64_t)
+
+#define ICE_FLAG_RSS                   BIT_ULL(0)
+#define ICE_FLAG_DCB                   BIT_ULL(1)
+#define ICE_FLAG_VMDQ                  BIT_ULL(2)
+#define ICE_FLAG_SRIOV                 BIT_ULL(3)
+#define ICE_FLAG_HEADER_SPLIT_DISABLED BIT_ULL(4)
+#define ICE_FLAG_HEADER_SPLIT_ENABLED  BIT_ULL(5)
+#define ICE_FLAG_FDIR                  BIT_ULL(6)
+#define ICE_FLAG_VXLAN                 BIT_ULL(7)
+#define ICE_FLAG_RSS_AQ_CAPABLE        BIT_ULL(8)
+#define ICE_FLAG_VF_MAC_BY_PF          BIT_ULL(9)
+#define ICE_FLAG_ALL  (ICE_FLAG_RSS | \
+		       ICE_FLAG_DCB | \
+		       ICE_FLAG_VMDQ | \
+		       ICE_FLAG_SRIOV | \
+		       ICE_FLAG_HEADER_SPLIT_DISABLED | \
+		       ICE_FLAG_HEADER_SPLIT_ENABLED | \
+		       ICE_FLAG_FDIR | \
+		       ICE_FLAG_VXLAN | \
+		       ICE_FLAG_RSS_AQ_CAPABLE | \
+		       ICE_FLAG_VF_MAC_BY_PF)
+
+struct ice_adapter;
+
+/**
+ * MAC filter structure
+ */
+struct ice_mac_filter_info {
+	struct ether_addr mac_addr;
+};
+
+TAILQ_HEAD(ice_mac_filter_list, ice_mac_filter);
+
+/* MAC filter list structure */
+struct ice_mac_filter {
+	TAILQ_ENTRY(ice_mac_filter) next;
+	struct ice_mac_filter_info mac_info;
+};
+
+/**
+ * VLAN filter structure
+ */
+struct ice_vlan_filter_info {
+	uint16_t vlan_id;
+};
+
+TAILQ_HEAD(ice_vlan_filter_list, ice_vlan_filter);
+
+/* VLAN filter list structure */
+struct ice_vlan_filter {
+	TAILQ_ENTRY(ice_vlan_filter) next;
+	struct ice_vlan_filter_info vlan_info;
+};
+
+struct pool_entry {
+	LIST_ENTRY(pool_entry) next;
+	uint16_t base;
+	uint16_t len;
+};
+
+LIST_HEAD(res_list, pool_entry);
+
+struct ice_res_pool_info {
+	uint32_t base;              /* Resource start index */
+	uint32_t num_alloc;         /* Allocated resource number */
+	uint32_t num_free;          /* Total available resource number */
+	struct res_list alloc_list; /* Allocated resource list */
+	struct res_list free_list;  /* Available resource list */
+};
+
+TAILQ_HEAD(ice_vsi_list_head, ice_vsi_list);
+
+struct ice_vsi;
+
+/* VSI list structure */
+struct ice_vsi_list {
+	TAILQ_ENTRY(ice_vsi_list) list;
+	struct ice_vsi *vsi;
+};
+
+struct ice_rx_queue;
+struct ice_tx_queue;
+
+/**
+ * Structure that defines a VSI, associated with an adapter.
+ */
+struct ice_vsi {
+	struct ice_adapter *adapter; /* Backreference to associated adapter */
+	struct ice_aqc_vsi_props info; /* VSI properties */
+	/**
+	 * When the driver is loaded, only a default main VSI exists. When a
+	 * new VSI needs to be added, the HW needs to know how the VSIs are
+	 * organized. Besides, a VSI is only an element and cannot switch
+	 * packets by itself; a new VEB component must be added to perform
+	 * switching. So a new VSI must specify its uplink (parent) VSI
+	 * before it is created. The uplink VSI checks whether it already
+	 * has a VEB to switch packets; if not, it tries to create one. Then
+	 * the uplink VSI adds the new VSI to its sib_vsi_list to manage all
+	 * the downlink VSIs.
+	 *  sib_vsi_list: the list of VSIs that share the same uplink VSI.
+	 *  parent_vsi  : the uplink VSI. It's NULL for the main VSI.
+	 *  veb         : the VEB associated with the VSI.
+	 */
+	struct ice_vsi_list sib_vsi_list; /* sibling vsi list */
+	struct ice_vsi *parent_vsi;
+	enum ice_vsi_type type; /* VSI types */
+	uint16_t vlan_num;       /* Total VLAN number */
+	uint16_t mac_num;        /* Total mac number */
+	struct ice_mac_filter_list mac_list; /* macvlan filter list */
+	struct ice_vlan_filter_list vlan_list; /* vlan filter list */
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_queue;     /* The first queue index of this VSI */
+	uint16_t vsi_id;         /* Hardware Id */
+	uint16_t idx;            /* vsi_handle: SW index in hw->vsi_ctx */
+	/* VF number to which the VSI connects, valid when VSI is VF type */
+	uint8_t vf_num;
+	uint16_t msix_intr; /* The MSIX interrupt binds to VSI */
+	uint16_t nb_msix;   /* The max number of msix vector */
+	uint8_t enabled_tc; /* The traffic class enabled */
+	uint8_t vlan_anti_spoof_on; /* The VLAN anti-spoofing enabled */
+	uint8_t vlan_filter_on; /* The VLAN filter enabled */
+	/* information about rss configuration */
+	u32 rss_key_size;
+	u32 rss_lut_size;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	struct ice_eth_stats eth_stats_offset;
+	struct ice_eth_stats eth_stats;
+	bool offset_loaded;
+};
+
+struct ice_pf {
+	struct ice_adapter *adapter; /* The adapter this PF is associated with */
+	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
+	/* Next free software VSI index.
+	 * To save effort, indexes are not recycled; we assume there are
+	 * more than enough of them.
+	 */
+	uint16_t next_vsi_idx;
+	uint16_t vsis_allocated;
+	uint16_t vsis_unallocated;
+	struct ice_res_pool_info qp_pool;    /* Queue pair pool */
+	struct ice_res_pool_info msix_pool;  /* MSIX interrupt pool */
+	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+	struct ether_addr dev_addr; /* PF device mac address */
+	uint64_t flags; /* PF feature flags */
+	uint16_t hash_lut_size; /* The size of hash lookup table */
+	uint16_t lan_nb_qp_max;
+	uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+	struct ice_hw_port_stats stats_offset;
+	struct ice_hw_port_stats stats;
+	/* internal packet statistics, it should be excluded from the total */
+	struct ice_eth_stats internal_stats_offset;
+	struct ice_eth_stats internal_stats;
+	bool offset_loaded;
+	bool adapter_stopped;
+};
+
+/**
+ * Structure to store private data for each PF/VF instance.
+ */
+struct ice_adapter {
+	/* Common for both PF and VF */
+	struct ice_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct ice_pf pf;
+	bool rx_bulk_alloc_allowed;
+	bool tx_simple_allowed;
+	/* ptype mapping table */
+	uint32_t ptype_tbl[ICE_MAX_PKT_TYPE] __rte_cache_min_aligned;
+};
+
+struct ice_vsi_vlan_pvid_info {
+	uint16_t on;		/* Enable or disable pvid */
+	union {
+		uint16_t pvid;	/* The PVID to apply; valid when 'on' is set */
+		struct {
+			/* Valid in case 'on' is cleared. 'tagged' will reject
+			 * tagged packets, while 'untagged' will reject
+			 * untagged packets.
+			 */
+			uint8_t tagged;
+			uint8_t untagged;
+		} reject;
+	} config;
+};
+
+#define ICE_DEV_TO_PCI(eth_dev) \
+	RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* ICE_DEV_PRIVATE_TO */
+#define ICE_DEV_PRIVATE_TO_PF(adapter) \
+	(&((struct ice_adapter *)adapter)->pf)
+#define ICE_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct ice_adapter *)adapter)->hw)
+#define ICE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ice_adapter *)adapter)
+
+/* ICE_VSI_TO */
+#define ICE_VSI_TO_HW(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->hw))
+#define ICE_VSI_TO_PF(vsi) \
+	(&(((struct ice_vsi *)vsi)->adapter->pf))
+#define ICE_VSI_TO_ETH_DEV(vsi) \
+	(((struct ice_vsi *)vsi)->adapter->eth_dev)
+
+/* ICE_PF_TO */
+#define ICE_PF_TO_HW(pf) \
+	(&(((struct ice_pf *)pf)->adapter->hw))
+#define ICE_PF_TO_ADAPTER(pf) \
+	((struct ice_adapter *)(pf)->adapter)
+#define ICE_PF_TO_ETH_DEV(pf) \
+	(((struct ice_pf *)pf)->adapter->eth_dev)
+
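+
+/* Round n down to the largest power of two that is <= n (0 yields 0),
+ * e.g. ice_align_floor(6) == 4 and ice_align_floor(8) == 8.
+ */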
+static inline int
+ice_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
+}
+#endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_logs.h b/drivers/net/ice/ice_logs.h
new file mode 100644
index 0000000..de2d573
--- /dev/null
+++ b/drivers/net/ice/ice_logs.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_LOGS_H_
+#define _ICE_LOGS_H_
+
+extern int ice_logtype_init;
+extern int ice_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_ICE_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ice_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#endif /* _ICE_LOGS_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
new file mode 100644
index 0000000..9ed7b27
--- /dev/null
+++ b/drivers/net/ice/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+	'ice_ethdev.c'
+	)
+
+deps += ['hash']
+includes += include_directories('base')
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
new file mode 100644
index 0000000..7b23b60
--- /dev/null
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -0,0 +1,4 @@
+DPDK_19.02 {
+
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 980eec2..45da3bb 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -17,6 +17,7 @@ drivers = ['af_packet',
 	'enic',
 	'failsafe',
 	'fm10k', 'i40e',
+	'ice',
 	'ifc',
 	'ixgbe',
 	'kni',
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d97..02e8b6f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -163,6 +163,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD)       += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD)      += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE)   += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD)       += -lrte_pmd_i40e
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD)        += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)      += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KNI)        += -lrte_pmd_kni
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 16/31] net/ice: support device and queue ops
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (14 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 15/31] net/ice: support device initialization Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 17/31] net/ice: support getting device information Wenzhuo Lu
                     ` (15 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Normally, when the device is started or stopped, its queues
should be started and stopped as well. This patch supports
both.

The following ops are added (a brief usage sketch follows the list):
dev_configure
dev_start
dev_stop
dev_close
dev_reset
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
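
For reference, a minimal application-side sketch that exercises these
ops through the standard rte_ethdev API (hypothetical port_id and
mb_pool; error checks omitted; not part of this patch):

	struct rte_eth_conf conf = { 0 };
	uint16_t port_id = 0; /* assumed port */

	rte_eth_dev_configure(port_id, 1, 1, &conf);
	rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
			       NULL, mb_pool);
	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
	rte_eth_dev_start(port_id); /* starts all non-deferred queues */
	/* ... forward traffic ... */
	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);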

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base               |   2 +
 doc/guides/nics/features/ice.ini |   1 +
 doc/guides/nics/ice.rst          |   8 +
 drivers/net/ice/Makefile         |   3 +-
 drivers/net/ice/ice_ethdev.c     | 199 ++++++++-
 drivers/net/ice/ice_rxtx.c       | 923 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       | 137 ++++++
 drivers/net/ice/meson.build      |   3 +-
 8 files changed, 1273 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ice/ice_rxtx.c
 create mode 100644 drivers/net/ice/ice_rxtx.h

diff --git a/config/common_base b/config/common_base
index 872f440..a342760 100644
--- a/config/common_base
+++ b/config/common_base
@@ -303,6 +303,8 @@ CONFIG_RTE_LIBRTE_ICE_PMD=y
 CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
+CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
 
 # Compile burst-oriented AVF PMD driver
 #
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 085e848..a43a9cd 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 946ed04..96a594f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -38,6 +38,14 @@ Please note that enabling debugging options may affect system performance.
 
   Toggle display of generic debugging messages.
 
+- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``)
+
+  Toggle bulk allocation for RX.
+
+- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``)
+
+  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.
+
 Runtime Config Options
 ~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile
index 70f23e3..bc24444 100644
--- a/drivers/net/ice/Makefile
+++ b/drivers/net/ice/Makefile
@@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 
-LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci
+LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool
 
 EXPORT_MAP := rte_pmd_ice_version.map
 
@@ -50,5 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a514755..0274d9e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -6,6 +6,7 @@
 
 #include "base/ice_sched.h"
 #include "ice_ethdev.h"
+#include "ice_rxtx.h"
 
 #define ICE_MAX_QP_NUM "max_queue_pair_num"
 #define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100
@@ -13,6 +14,12 @@
 int ice_logtype_init;
 int ice_logtype_driver;
 
+static int ice_dev_configure(struct rte_eth_dev *dev);
+static int ice_dev_start(struct rte_eth_dev *dev);
+static void ice_dev_stop(struct rte_eth_dev *dev);
+static void ice_dev_close(struct rte_eth_dev *dev);
+static int ice_dev_reset(struct rte_eth_dev *dev);
+
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) },
@@ -21,7 +28,19 @@
 };
 
 static const struct eth_dev_ops ice_eth_dev_ops = {
-	.dev_configure                = NULL,
+	.dev_configure                = ice_dev_configure,
+	.dev_start                    = ice_dev_start,
+	.dev_stop                     = ice_dev_stop,
+	.dev_close                    = ice_dev_close,
+	.dev_reset                    = ice_dev_reset,
+	.rx_queue_start               = ice_rx_queue_start,
+	.rx_queue_stop                = ice_rx_queue_stop,
+	.tx_queue_start               = ice_tx_queue_start,
+	.tx_queue_stop                = ice_tx_queue_stop,
+	.rx_queue_setup               = ice_rx_queue_setup,
+	.rx_queue_release             = ice_rx_queue_release,
+	.tx_queue_setup               = ice_tx_queue_setup,
+	.tx_queue_release             = ice_tx_queue_release,
 };
 
 static void
@@ -559,11 +578,41 @@
 }
 
 static void
+ice_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t i;
+
+	/* avoid stopping again */
+	if (pf->adapter_stopped)
+		return;
+
+	/* stop and clear all Rx queues */
+	for (i = 0; i < data->nb_rx_queues; i++)
+		ice_rx_queue_stop(dev, i);
+
+	/* stop and clear all Tx queues */
+	for (i = 0; i < data->nb_tx_queues; i++)
+		ice_tx_queue_stop(dev, i);
+
+	/* Clear all queues and release mbufs */
+	ice_clear_queues(dev);
+
+	pf->adapter_stopped = true;
+}
+
+static void
 ice_dev_close(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
+	ice_dev_stop(dev);
+
+	/* release all queue resource */
+	ice_free_queues(dev);
+
 	ice_res_pool_destroy(&pf->msix_pool);
 	ice_release_vsi(pf->main_vsi);
 
@@ -594,6 +643,154 @@
 }
 
 static int
+ice_dev_configure(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Initialize to TRUE. If any Rx queue doesn't meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	return 0;
+}
+
+static int ice_init_rss(struct ice_pf *pf)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	struct ice_aqc_get_set_rss_keys key;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+	vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE;
+	vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size;
+
+	if (!vsi->rss_key)
+		vsi->rss_key = rte_zmalloc(NULL,
+					   vsi->rss_key_size, 0);
+	if (!vsi->rss_lut)
+		vsi->rss_lut = rte_zmalloc(NULL,
+					   vsi->rss_lut_size, 0);
+	if (!vsi->rss_key || !vsi->rss_lut)
+		return -ENOMEM;
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vsi->rss_key_size; i++)
+			vsi->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vsi->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vsi->rss_key_size));
+	}
+	rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size);
+	ret = ice_aq_set_rss_key(hw, vsi->idx, &key);
+	if (ret)
+		return -EINVAL;
+
+	/* init RSS LUT table */
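+	/* e.g. with nb_q == 4 the LUT becomes 0,1,2,3,0,1,2,3,... */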
+	for (i = 0; i < vsi->rss_lut_size; i++)
+		vsi->rss_lut[i] = i % nb_q;
+
+	ret = ice_aq_set_rss_lut(hw, vsi->idx,
+				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
+				 vsi->rss_lut, vsi->rss_lut_size);
+	if (ret)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+ice_dev_start(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev_data *data = dev->data;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	uint16_t nb_rxq = 0;
+	uint16_t nb_txq, i;
+	int ret;
+
+	/* program Tx queues' context in hardware */
+	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+		ret = ice_tx_queue_start(dev, nb_txq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to start Tx queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	/* program Rx queues' context in hardware */
+	for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+		ret = ice_rx_queue_start(dev, nb_rxq);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to start Rx queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	ret = ice_init_rss(pf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
+		goto rx_err;
+	}
+
+	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
+				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
+				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
+				     ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS |
+				     ICE_AQ_LINK_EVENT_SIGNAL_DETECT |
+				     ICE_AQ_LINK_EVENT_AN_COMPLETED |
+				     ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)),
+				     NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(WARNING, "Failed to set PHY event mask");
+
+	pf->adapter_stopped = false;
+
+	return 0;
+
+	/* stop the queues that were started if we failed to start them all */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_tx_queue_stop(dev, i);
+
+	return -EIO;
+}
+
+static int
+ice_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = ice_dev_uninit(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	ret = ice_dev_init(dev);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret);
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
new file mode 100644
index 0000000..9c5eee1
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.c
@@ -0,0 +1,923 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+
+#include "ice_rxtx.h"
+
+#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP
+
+#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define ICE_RX_ERR_BITS 0x3f
+
+static enum ice_status
+ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
+{
+	struct ice_vsi *vsi = rxq->vsi;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+	struct ice_rlan_ctx rx_ctx;
+	enum ice_status err;
+	uint16_t buf_size, len;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	uint32_t regval;
+
+	/**
+	 * The kernel driver uses flex descriptor. It sets the register
+	 * to flex descriptor mode.
+	 * DPDK uses legacy descriptor. It should set the register back
+	 * to the default value, then uses legacy descriptor mode.
+	 */
+	regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) &
+		 QRXFLXP_CNTXT_RXDID_PRIO_M;
+	ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval);
+
+	/* Set buffer size as the head split is disabled. */
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	rxq->max_pkt_len = RTE_MIN(len,
+				   dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (rxq->max_pkt_len <= ETHER_MAX_LEN ||
+		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must "
+				    "be larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (rxq->max_pkt_len < ETHER_MIN_LEN ||
+		    rxq->max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	memset(&rx_ctx, 0, sizeof(rx_ctx));
+
+	rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+	rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S;
+	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
+	rx_ctx.dtype = 0; /* No Header Split mode */
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	rx_ctx.dsize = 1; /* 32B descriptors */
+#endif
+	rx_ctx.rxmax = rxq->max_pkt_len;
+	/* TPH: Transaction Layer Packet (TLP) processing hints */
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	/* Low Receive Queue Threshold, defined in units of 64 descriptors.
+	 * When the number of free descriptors drops below lrxqthresh,
+	 * an immediate interrupt is triggered.
+	 */
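+	/* e.g. lrxqthresh = 2 arms the interrupt at 2 * 64 = 128 free
+	 * descriptors.
+	 */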
+	rx_ctx.lrxqthresh = 2;
+	/* Default: 32-byte descriptors; VLAN tag extracted to L2TAG2 (1st) */
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 0;
+
+	err = ice_clear_rxq_ctx(hw, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+	err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context",
+			    rxq->queue_id);
+		return -EINVAL;
+	}
+
+	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+			      RTE_PKTMBUF_HEADROOM);
+
+	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
+
+	/* Init the Rx tail register */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return 0;
+}
+
+/* Allocate mbufs for all descriptors in rx queue */
+static int
+ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq)
+{
+	struct ice_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile union ice_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp);
+
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
+/* Free all mbufs for descriptors in rx queue */
+static void
+ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq)
+{
+	uint16_t i;
+
+	if (!rxq || !rxq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+			rxq->sw_ring[i].mbuf = NULL;
+		}
+	}
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+}
+
+/* turn on or off rx queue
+ * @q_idx: queue index in pf scope
+ * @on: turn on or off the queue
+ */
+static int
+ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on)
+{
+	uint32_t reg;
+	uint16_t j;
+
+	/* QRX_CTRL = QRX_ENA */
+	reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+
+	if (on) {
+		if (reg & QRX_CTRL_QENA_STAT_M)
+			return 0; /* Already on, skip */
+		reg |= QRX_CTRL_QENA_REQ_M;
+	} else {
+		if (!(reg & QRX_CTRL_QENA_STAT_M))
+			return 0; /* Already off, skip */
+		reg &= ~QRX_CTRL_QENA_REQ_M;
+	}
+
+	/* Write the register */
+	ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg);
+	/* Check the result. QENA_STAT is said to follow
+	 * QENA_REQ within no more than 10 us.
+	 * TODO: need to change the wait counter later
+	 */
+	for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) {
+		rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US);
+		reg = ICE_READ_REG(hw, QRX_CTRL(q_idx));
+		if (on) {
+			if ((reg & QRX_CTRL_QENA_REQ_M) &&
+			    (reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		} else {
+			if (!(reg & QRX_CTRL_QENA_REQ_M) &&
+			    !(reg & QRX_CTRL_QENA_STAT_M))
+				break;
+		}
+	}
+
+	/* Check if it is timeout */
+	if (j >= ICE_CHK_Q_ENA_COUNT) {
+		PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]",
+			    (on ? "enable" : "disable"), q_idx);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
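+/* e.g. rx_free_thresh = 32 with nb_rx_desc = 512 satisfies all three
+ * preconditions checked below: 32 >= ICE_RX_MAX_BURST (32), 32 < 512,
+ * and 512 % 32 == 0.
+ */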
+static inline int
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq)
+#else
+ice_check_rx_burst_bulk_alloc_preconditions
+	(__rte_unused struct ice_rx_queue *rxq)
+#endif
+{
+	int ret = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "ICE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, ICE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+#else
+	ret = -EINVAL;
+#endif
+
+	return ret;
+}
+
+/* reset fields in ice_rx_queue back to default */
+static void
+ice_reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	unsigned int i;
+	uint16_t len;
+
+	if (!rxq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
+		len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST);
+	else
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+		len = rxq->nb_rx_desc;
+
+	for (i = 0; i < len * sizeof(union ice_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = 0; i < ICE_RX_MAX_BURST; ++i)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+int
+ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "RX queue %u is out of range %u",
+			    rx_queue_id, dev->data->nb_rx_queues);
+		return -EINVAL;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (!rxq || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	err = ice_program_hw_rx_queue(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to program RX queue %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
+	err = ice_alloc_rx_queue_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return -ENOMEM;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		return -EINVAL;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] =
+		RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_rx_queue *rxq;
+	int err;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+				    rx_queue_id);
+			return -EINVAL;
+		}
+		ice_rx_queue_release_mbufs(rxq);
+		ice_reset_rx_queue(rxq);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	int err;
+	struct ice_vsi *vsi;
+	struct ice_hw *hw;
+	struct ice_aqc_add_tx_qgrp txq_elem;
+	struct ice_tlan_ctx tx_ctx;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq || !txq->q_set) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available or not set up",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	vsi = txq->vsi;
+	hw = ICE_VSI_TO_HW(vsi);
+
+	memset(&txq_elem, 0, sizeof(txq_elem));
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	txq_elem.num_txqs = 1;
+	txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx);
+
+	tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->nb_tx_desc;
+	tx_ctx.pf_num = hw->pf_id;
+	tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
+	tx_ctx.src_vsi = vsi->vsi_id;
+	tx_ctx.port_num = hw->port_info->lport;
+	tx_ctx.tso_ena = 1; /* tso enable */
+	tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */
+	tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */
+
+	ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx,
+		    ice_tlan_ctx_info);
+
+	txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx);
+
+	/* Init the Tx tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem,
+			      sizeof(txq_elem), NULL);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to add lan txq");
+		return -EIO;
+	}
+	/* store the schedule node id */
+	txq->q_teid = txq_elem.txqs[0].q_teid;
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+/* Free all mbufs for descriptors in tx queue */
+static void
+ice_tx_queue_release_mbufs(struct ice_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+static void
+ice_reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size; /* 32-bit: 4096 descs * 16 B would overflow u16 */
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
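+	/* Mark every descriptor done and link the SW ring circularly;
+	 * e.g. with 4 descriptors: txe[3].next_id = 0, txe[0].next_id = 1,
+	 * txe[1].next_id = 2, txe[2].next_id = 3.
+	 */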
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ice_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->cmd_type_offset_bsz =
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+}
+
+int
+ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_tx_queue *txq;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	enum ice_status status;
+	uint16_t q_ids[1];
+	uint32_t q_teids[1];
+
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(ERR, "TX queue %u is out of range %u",
+			    tx_queue_id, dev->data->nb_tx_queues);
+		return -EINVAL;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "TX queue %u is not available",
+			    tx_queue_id);
+		return -EINVAL;
+	}
+
+	q_ids[0] = txq->reg_idx;
+	q_teids[0] = txq->q_teid;
+
+	status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids,
+				 ICE_NO_RESET, 0, NULL);
+	if (status != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to disable LAN Tx queue");
+		return -EINVAL;
+	}
+
+	ice_tx_queue_release_mbufs(txq);
+	ice_reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+ice_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mp)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rx_queue *rxq;
+	const struct rte_memzone *rz;
+	uint32_t ring_size;
+	uint16_t len;
+	int use_def_burst_func = 1;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		ice_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+
+	rxq->reg_idx = vsi->base_queue + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->vsi = vsi;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+	/* Allocate the maximum number of RX ring hardware descriptors. */
+	len = ICE_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	/**
+	 * Allocate a little more memory because the vectorized/bulk_alloc Rx
+	 * functions don't check boundaries on each iteration.
+	 */
+	len += ICE_RX_MAX_BURST;
+#endif
+
+	/* Compute the ring size and round it up to the DMA alignment. */
+	ring_size = sizeof(union ice_rx_desc) * len;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!rz) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		return -ENOMEM;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(rz->addr, 0, ring_size);
+
+	rxq->rx_ring_phys_addr = rz->phys_addr;
+	rxq->rx_ring = (union ice_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+#else
+	len = nb_desc;
+#endif
+
+	/* Allocate the software ring. */
+	rxq->sw_ring = rte_zmalloc_socket(NULL,
+					  sizeof(struct ice_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE,
+					  socket_id);
+	if (!rxq->sw_ring) {
+		ice_rx_queue_release(rxq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq);
+
+	if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested, "
+			     "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is "
+			     "not enabled on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
+	return 0;
+}
+
+void
+ice_rx_queue_release(void *rxq)
+{
+	struct ice_rx_queue *q = (struct ice_rx_queue *)rxq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
+		return;
+	}
+
+	ice_rx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+int
+ice_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t queue_idx,
+		   uint16_t nb_desc,
+		   unsigned int socket_id,
+		   const struct rte_eth_txconf *tx_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_tx_queue *txq;
+	const struct rte_memzone *tz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint64_t offloads;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	if (nb_desc % ICE_ALIGN_RING_DESC != 0 ||
+	    nb_desc > ICE_MAX_RING_DESC ||
+	    nb_desc < ICE_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/**
+	 * The following two parameters control the setting of the RS bit on
+	 * transmit descriptors. TX descriptors will have their RS bit set
+	 * after txq->tx_rs_thresh descriptors have been used. The TX
+	 * descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required to
+	 * transmit a packet is greater than the number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be greater than 0.
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *  - tx_free_thresh must be greater than 0.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
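+	/* e.g. nb_desc = 512 with the defaults (tx_rs_thresh = 32,
+	 * tx_free_thresh = 32) satisfies every constraint above:
+	 * 32 < 510, 32 <= 32, and 512 % 32 == 0.
+	 */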
+	tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ?
+				  tx_conf->tx_rs_thresh :
+				  ICE_DEFAULT_TX_RSBIT_THRESH);
+	tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ?
+				    tx_conf->tx_free_thresh :
+				    ICE_DEFAULT_TX_FREE_THRESH);
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
+			     "number of TX descriptors minus 2. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
+			     "number of TX descriptors minus 3. "
+			     "(tx_free_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
+			     "equal to tx_free_thresh. (tx_free_thresh=%u"
+			     " tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_rs_thresh=%u"
+			     " port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) {
+		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
+			     "tx_rs_thresh is greater than 1. "
+			     "(tx_rs_thresh=%u port=%d queue=%d)",
+			     (unsigned int)tx_rs_thresh,
+			     (int)dev->data->port_id,
+			     (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		ice_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket(NULL,
+				 sizeof(struct ice_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN);
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, ICE_RING_BASE_ALIGN,
+				      socket_id);
+	if (!tz) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+
+	txq->reg_idx = vsi->base_queue + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->vsi = vsi;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tx_ring_phys_addr = tz->phys_addr;
+	txq->tx_ring = (struct ice_tx_desc *)tz->addr;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket(NULL,
+				   sizeof(struct ice_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		ice_tx_queue_release(txq);
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		return -ENOMEM;
+	}
+
+	ice_reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+ice_tx_queue_release(void *txq)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+
+	if (!q) {
+		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
+		return;
+	}
+
+	ice_tx_queue_release_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_free(q);
+}
+
+void
+ice_clear_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ice_tx_queue_release_mbufs(dev->data->tx_queues[i]);
+		ice_reset_tx_queue(dev->data->tx_queues[i]);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ice_rx_queue_release_mbufs(dev->data->rx_queues[i]);
+		ice_reset_rx_queue(dev->data->rx_queues[i]);
+	}
+}
+
+void
+ice_free_queues(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		ice_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		ice_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
new file mode 100644
index 0000000..088a206
--- /dev/null
+++ b/drivers/net/ice/ice_rxtx.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _ICE_RXTX_H_
+#define _ICE_RXTX_H_
+
+#include "ice_ethdev.h"
+
+#define ICE_ALIGN_RING_DESC  32
+#define ICE_MIN_RING_DESC    64
+#define ICE_MAX_RING_DESC    4096
+#define ICE_DMA_MEM_ALIGN    4096
+#define ICE_RING_BASE_ALIGN  128
+
+#define ICE_RX_MAX_BURST 32
+#define ICE_TX_MAX_BURST 32
+
+#define ICE_CHK_Q_ENA_COUNT        100
+#define ICE_CHK_Q_ENA_INTERVAL_US  100
+
+#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+#define ice_rx_desc ice_16byte_rx_desc
+#else
+#define ice_rx_desc ice_32byte_rx_desc
+#endif
+
+#define ICE_SUPPORT_CHAIN_NUM 5
+
+struct ice_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+struct ice_rx_queue {
+	struct rte_mempool *mp; /* mbuf pool to populate RX ring */
+	volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */
+	uint64_t rx_ring_phys_addr; /* RX ring DMA address */
+	struct ice_rx_entry *sw_ring; /* address of RX soft ring */
+	uint16_t nb_rx_desc; /* number of RX descriptors */
+	uint16_t rx_free_thresh; /* max free RX desc to hold */
+	uint16_t rx_tail; /* current value of tail */
+	uint16_t nb_rx_hold; /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	uint16_t rx_nb_avail; /**< number of staged packets ready */
+	uint16_t rx_next_avail; /**< index of next staged packets */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+	struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2];
+#endif
+	uint8_t port_id; /* device port ID */
+	uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id; /* RX queue index */
+	uint16_t reg_idx; /* RX queue register index */
+	uint8_t drop_en; /* if not 0, set register bit */
+	volatile uint8_t *qrx_tail; /* register address of tail */
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t rx_buf_len; /* The packet buffer size */
+	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t max_pkt_len; /* Maximum packet length */
+	bool q_set; /* indicate if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct ice_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+struct ice_tx_queue {
+	uint16_t nb_tx_desc; /* number of TX descriptors */
+	uint64_t tx_ring_phys_addr; /* TX ring DMA address */
+	volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */
+	struct ice_tx_entry *sw_ring; /* virtual address of SW ring */
+	uint16_t tx_tail; /* current value of tail register */
+	volatile uint8_t *qtx_tail; /* register address of tail */
+	uint16_t nb_tx_used; /* number of TX desc used since RS bit set */
+	/* index to last TX descriptor to have been cleaned */
+	uint16_t last_desc_cleaned;
+	/* Total number of TX descriptors ready to be allocated. */
+	uint16_t nb_tx_free;
+	/* Start freeing TX buffers if there are less free descriptors than
+	 * this value.
+	 */
+	uint16_t tx_free_thresh;
+	/* Number of TX descriptors to use before RS bit is set. */
+	uint16_t tx_rs_thresh;
+	uint8_t pthresh; /**< Prefetch threshold register. */
+	uint8_t hthresh; /**< Host threshold register. */
+	uint8_t wthresh; /**< Write-back threshold reg. */
+	uint8_t port_id; /* Device port identifier. */
+	uint16_t queue_id; /* TX queue index. */
+	uint32_t q_teid; /* TX schedule node id. */
+	uint16_t reg_idx;
+	uint64_t offloads;
+	struct ice_vsi *vsi; /* the VSI this queue belongs to */
+	uint16_t tx_next_dd;
+	uint16_t tx_next_rs;
+	bool tx_deferred_start; /* don't start this queue in dev start */
+	bool q_set; /* indicate if tx queue has been configured */
+};
+
+/* Offload features */
+union ice_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		uint64_t outer_l2_len:8; /* outer L2 Header Length */
+		uint64_t outer_l3_len:16; /* outer L3 Header Length */
+	};
+};
+
+int ice_rx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp);
+int ice_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf);
+int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ice_rx_queue_release(void *rxq);
+void ice_tx_queue_release(void *txq);
+void ice_clear_queues(struct rte_eth_dev *dev);
+void ice_free_queues(struct rte_eth_dev *dev);
+#endif /* _ICE_RXTX_H_ */
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 9ed7b27..857dc0e 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -5,7 +5,8 @@ subdir('base')
 objs = [base_objs]
 
 sources = files(
-	'ice_ethdev.c'
+	'ice_ethdev.c',
+	'ice_rxtx.c'
 	)
 
 deps += ['hash']
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
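
The ice_tx_offload union defined in the header above packs every header
length the TX context descriptor needs into a single 64-bit word, so the
transmit hot path can read and write it with one load/store. Below is a
minimal sketch, not part of the patch, of how such a union could be
filled from the standard rte_mbuf TX offload fields; the helper name is
hypothetical:

#include <rte_mbuf.h>

/* Hypothetical helper: copy the per-packet header lengths from an
 * mbuf into the driver's ice_tx_offload union (defined above).
 */
static inline union ice_tx_offload
ice_tx_offload_from_mbuf(const struct rte_mbuf *m)
{
	union ice_tx_offload off = { .data = 0 };

	off.l2_len = m->l2_len;             /* MAC header length */
	off.l3_len = m->l3_len;             /* IP header length */
	off.l4_len = m->l4_len;             /* L4 header length */
	off.tso_segsz = m->tso_segsz;       /* TCP TSO segment size */
	off.outer_l2_len = m->outer_l2_len; /* outer MAC header length */
	off.outer_l3_len = m->outer_l3_len; /* outer IP header length */
	return off;
}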

* [dpdk-dev] [PATCH v6 17/31] net/ice: support getting device information
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (15 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 16/31] net/ice: support device and queue ops Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 18/31] net/ice: support link update Wenzhuo Lu
                     ` (14 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops dev_infos_get.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 103 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h     |  13 +++++
 3 files changed, 117 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index a43a9cd..af8f0d3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0274d9e..eee4053 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -19,6 +19,8 @@
 static void ice_dev_stop(struct rte_eth_dev *dev);
 static void ice_dev_close(struct rte_eth_dev *dev);
 static int ice_dev_reset(struct rte_eth_dev *dev);
+static void ice_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -41,6 +43,7 @@
 	.rx_queue_release             = ice_rx_queue_release,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
+	.dev_infos_get                = ice_dev_info_get,
 };
 
 static void
@@ -790,6 +793,106 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = pci_dev->max_vfs;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_KEEP_CRC |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_VLAN_FILTER;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS |
+		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->rx_queue_offload_capa = 0;
+	dev_info->tx_queue_offload_capa = 0;
+
+	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
+	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->speed_capa = ETH_LINK_SPEED_10M |
+			       ETH_LINK_SPEED_100M |
+			       ETH_LINK_SPEED_1G |
+			       ETH_LINK_SPEED_2_5G |
+			       ETH_LINK_SPEED_5G |
+			       ETH_LINK_SPEED_10G |
+			       ETH_LINK_SPEED_20G |
+			       ETH_LINK_SPEED_25G |
+			       ETH_LINK_SPEED_40G;
+
+	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
+	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
+
+	dev_info->default_rxportconf.burst_size = ICE_RX_MAX_BURST;
+	dev_info->default_txportconf.burst_size = ICE_TX_MAX_BURST;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
+	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 94e45c8..3cefa5b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -102,6 +102,19 @@
 		       ICE_FLAG_RSS_AQ_CAPABLE | \
 		       ICE_FLAG_VF_MAC_BY_PF)
 
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
 struct ice_adapter;
 
 /**
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
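
The dev_infos_get ops added here is reached through the generic ethdev
wrapper. A minimal application-side sketch follows, assuming port_id is
a valid initialized port; rte_eth_dev_info_get() and
DEV_TX_OFFLOAD_TCP_TSO are the standard ethdev API of this DPDK
generation, and the function name is illustrative:

#include <rte_ethdev.h>

/* Illustrative check: does the port advertise TCP TSO in the
 * tx_offload_capa that ice_dev_info_get() fills in?
 */
static int
port_supports_tso(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) != 0;
}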

* [dpdk-dev] [PATCH v6 18/31] net/ice: support link update
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (16 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 17/31] net/ice: support getting device information Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 19/31] net/ice: support queue information getting Wenzhuo Lu
                     ` (13 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops link_update.
The LSC interrupt is also enabled in this patch.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 332 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 334 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index af8f0d3..eb852ff 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -5,6 +5,8 @@
 ;
 [Features]
 Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index eee4053..853f43a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -21,6 +21,8 @@
 static int ice_dev_reset(struct rte_eth_dev *dev);
 static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int ice_link_update(struct rte_eth_dev *dev,
+			   int wait_to_complete);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -44,6 +46,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.link_update                  = ice_link_update,
 };
 
 static void
@@ -330,6 +333,187 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Enable IRQ0 */
+static void
+ice_pf_enable_irq0(struct ice_hw *hw)
+{
+	/* reset the registers */
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, 0);
+	ICE_READ_REG(hw, PFINT_OICR);
+
+#ifdef ICE_LSE_SPT
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA,
+		      (uint32_t)(PFINT_OICR_ENA_INT_ENA_M &
+				 (~PFINT_OICR_LINK_STAT_CHANGE_M)));
+
+	ICE_WRITE_REG(hw, PFINT_OICR_CTL,
+		      (0 & PFINT_OICR_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_OICR_CTL_ITR_INDX_S) &
+		       PFINT_OICR_CTL_ITR_INDX_M) |
+		      PFINT_OICR_CTL_CAUSE_ENA_M);
+
+	ICE_WRITE_REG(hw, PFINT_FW_CTL,
+		      (0 & PFINT_FW_CTL_MSIX_INDX_M) |
+		      ((0 << PFINT_FW_CTL_ITR_INDX_S) &
+		       PFINT_FW_CTL_ITR_INDX_M) |
+		      PFINT_FW_CTL_CAUSE_ENA_M);
+#else
+	ICE_WRITE_REG(hw, PFINT_OICR_ENA, PFINT_OICR_ENA_INT_ENA_M);
+#endif
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+		      GLINT_DYN_CTL_INTENA_M |
+		      GLINT_DYN_CTL_CLEARPBA_M |
+		      GLINT_DYN_CTL_ITR_INDX_M);
+
+	ice_flush(hw);
+}
+
+/* Disable IRQ0 */
+static void
+ice_pf_disable_irq0(struct ice_hw *hw)
+{
+	/* Disable all interrupt types */
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+	ice_flush(hw);
+}
+
+#ifdef ICE_LSE_SPT
+static void
+ice_handle_aq_msg(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_ctl_q_info *cq = &hw->adminq;
+	struct ice_rq_event_info event;
+	uint16_t pending, opcode;
+	int ret;
+
+	event.buf_len = ICE_AQ_MAX_BUF_LEN;
+	event.msg_buf = rte_zmalloc(NULL, event.buf_len, 0);
+	if (!event.msg_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate mem");
+		return;
+	}
+
+	pending = 1;
+	while (pending) {
+		ret = ice_clean_rq_elem(hw, cq, &event, &pending);
+
+		if (ret != ICE_SUCCESS) {
+			PMD_DRV_LOG(INFO,
+				    "Failed to read msg from AdminQ, "
+				    "adminq_err: %u",
+				    hw->adminq.sq_last_status);
+			break;
+		}
+		opcode = rte_le_to_cpu_16(event.desc.opcode);
+
+		switch (opcode) {
+		case ice_aqc_opc_get_link_status:
+			ret = ice_link_update(dev, 0);
+			if (!ret)
+				_rte_eth_dev_callback_process
+					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+			break;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet",
+				    opcode);
+			break;
+		}
+	}
+	rte_free(event.msg_buf);
+}
+#endif
+
+/**
+ * Interrupt handler triggered by the NIC to handle a specific
+ * interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ice_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t oicr;
+	uint32_t reg;
+	uint8_t pf_num;
+	uint8_t event;
+	uint16_t queue;
+#ifdef ICE_LSE_SPT
+	uint32_t int_fw_ctl;
+#endif
+
+	/* Disable interrupt */
+	ice_pf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	oicr = ICE_READ_REG(hw, PFINT_OICR);
+#ifdef ICE_LSE_SPT
+	int_fw_ctl = ICE_READ_REG(hw, PFINT_FW_CTL);
+#endif
+
+	/* No interrupt event indicated */
+	if (!(oicr & PFINT_OICR_INTEVENT_M)) {
+		PMD_DRV_LOG(INFO, "No interrupt event");
+		goto done;
+	}
+
+#ifdef ICE_LSE_SPT
+	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
+		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
+		ice_handle_aq_msg(dev);
+	}
+#else
+	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
+		PMD_DRV_LOG(INFO, "OICR: link state change event");
+		ice_link_update(dev, 0);
+	}
+#endif
+
+	if (oicr & PFINT_OICR_MAL_DETECT_M) {
+		PMD_DRV_LOG(WARNING, "OICR: MDD event");
+		reg = ICE_READ_REG(hw, GL_MDET_TX_PQM);
+		if (reg & GL_MDET_TX_PQM_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_PQM_PF_NUM_M) >>
+				 GL_MDET_TX_PQM_PF_NUM_S;
+			event = (reg & GL_MDET_TX_PQM_MAL_TYPE_M) >>
+				GL_MDET_TX_PQM_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_PQM_QNUM_M) >>
+				GL_MDET_TX_PQM_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by PQM on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+
+		reg = ICE_READ_REG(hw, GL_MDET_TX_TCLAN);
+		if (reg & GL_MDET_TX_TCLAN_VALID_M) {
+			pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
+				 GL_MDET_TX_TCLAN_PF_NUM_S;
+			event = (reg & GL_MDET_TX_TCLAN_MAL_TYPE_M) >>
+				GL_MDET_TX_TCLAN_MAL_TYPE_S;
+			queue = (reg & GL_MDET_TX_TCLAN_QNUM_M) >>
+				GL_MDET_TX_TCLAN_QNUM_S;
+
+			PMD_DRV_LOG(WARNING, "Malicious Driver Detection event "
+				    "%d by TCLAN on TX queue %d PF# %d",
+				    event, queue, pf_num);
+		}
+	}
+done:
+	/* Enable interrupt */
+	ice_pf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
 /*  Initialize SW parameters of PF */
 static int
 ice_pf_sw_init(struct rte_eth_dev *dev)
@@ -487,6 +671,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 ice_dev_init(struct rte_eth_dev *dev)
 {
 	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int ret;
@@ -494,6 +679,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	dev->dev_ops = &ice_eth_dev_ops;
 
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
+	intr_handle = &pci_dev->intr_handle;
 
 	pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	pf->adapter->eth_dev = dev;
@@ -539,6 +725,15 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	/* register callback func to eal lib */
+	rte_intr_callback_register(intr_handle,
+				   ice_interrupt_handler, dev);
+
+	ice_pf_enable_irq0(hw);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(intr_handle);
+
 	return 0;
 
 err_pf_setup:
@@ -585,6 +780,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
 
 	/* avoid stopping again */
@@ -602,6 +799,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	pf->adapter_stopped = true;
 }
 
@@ -627,6 +831,8 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 {
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	ice_dev_close(dev);
 
@@ -637,6 +843,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 	rte_free(dev->data->mac_addrs);
 	dev->data->mac_addrs = NULL;
 
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     ice_interrupt_handler, dev);
+
 	ice_release_vsi(pf->main_vsi);
 	ice_sched_cleanup_all(hw);
 	rte_free(hw->port_info);
@@ -755,6 +968,9 @@ static int ice_init_rss(struct ice_pf *pf)
 	if (ret != ICE_SUCCESS)
 		PMD_DRV_LOG(WARNING, "Fail to set phy mask");
 
+	/* Call the get_link_info AQ command to enable/disable LSE */
+	ice_link_update(dev, 0);
+
 	pf->adapter_stopped = false;
 
 	return 0;
@@ -893,6 +1109,122 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
 }
 
+static inline int
+ice_atomic_read_link_status(struct rte_eth_dev *dev,
+			    struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &dev->data->dev_link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+ice_atomic_write_link_status(struct rte_eth_dev *dev,
+			     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &dev->data->dev_link;
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+ice_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+{
+#define CHECK_INTERVAL 100  /* 100ms */
+#define MAX_REPEAT_TIME 10  /* 1s (10 * 100ms) in total */
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_link_status link_status;
+	struct rte_eth_link link, old;
+	int status;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
+
+	memset(&link, 0, sizeof(link));
+	memset(&old, 0, sizeof(old));
+	memset(&link_status, 0, sizeof(link_status));
+	ice_atomic_read_link_status(dev, &old);
+
+	do {
+		/* Get link status information from hardware */
+		status = ice_aq_get_link_info(hw->port_info, enable_lse,
+					      &link_status, NULL);
+		if (status != ICE_SUCCESS) {
+			link.link_speed = ETH_SPEED_NUM_100M;
+			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			PMD_DRV_LOG(ERR, "Failed to get link info");
+			goto out;
+		}
+
+		link.link_status = link_status.link_info & ICE_AQ_LINK_UP;
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (--rep_cnt);
+
+	if (!link.link_status)
+		goto out;
+
+	/* Full-duplex operation at all supported speeds */
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	/* Parse the link status */
+	switch (link_status.link_speed) {
+	case ICE_AQ_LINK_SPEED_10MB:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case ICE_AQ_LINK_SPEED_100MB:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case ICE_AQ_LINK_SPEED_1000MB:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case ICE_AQ_LINK_SPEED_2500MB:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_5GB:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case ICE_AQ_LINK_SPEED_20GB:
+		link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case ICE_AQ_LINK_SPEED_25GB:
+		link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case ICE_AQ_LINK_SPEED_40GB:
+		link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case ICE_AQ_LINK_SPEED_UNKNOWN:
+	default:
+		PMD_DRV_LOG(ERR, "Unknown link speed");
+		link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+			      ETH_LINK_SPEED_FIXED);
+
+out:
+	ice_atomic_write_link_status(dev, &link);
+	if (link.link_status == old.link_status)
+		return -1;
+
+	return 0;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
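
With link_update and the LSC interrupt wired up as above, an application
that sets dev_conf.intr_conf.lsc = 1 before rte_eth_dev_configure() can
react to link changes asynchronously. Below is a minimal sketch of the
application side; rte_eth_dev_callback_register() and
rte_eth_link_get_nowait() are the standard ethdev calls, and nothing
here is ice-specific:

#include <stdio.h>
#include <rte_ethdev.h>

/* Callback invoked on RTE_ETH_EVENT_INTR_LSC; reads the link state
 * that the PMD's link_update stored in dev->data->dev_link.
 */
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type __rte_unused,
	     void *param __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);
	printf("Port %u link %s, speed %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

/* Registration, typically done once after rte_eth_dev_configure():
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *				 lsc_event_cb, NULL);
 */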

* [dpdk-dev] [PATCH v6 19/31] net/ice: support queue information getting
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (17 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 18/31] net/ice: support link update Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 20/31] net/ice: support packet type getting Wenzhuo Lu
                     ` (12 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rxq_info_get
txq_info_get
rx_queue_count

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 63 ++++--------------------------------------
 drivers/net/ice/ice_ethdev.h | 13 ---------
 drivers/net/ice/ice_rxtx.c   | 66 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  5 ++++
 4 files changed, 76 insertions(+), 71 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 853f43a..d997501 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -47,6 +47,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
 	.link_update                  = ice_link_update,
+	.rxq_info_get                 = ice_rxq_info_get,
+	.txq_info_get                 = ice_txq_info_get,
+	.rx_queue_count               = ice_rx_queue_count,
 };
 
 static void
@@ -1024,69 +1027,13 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->max_mac_addrs = vsi->max_macaddrs;
 	dev_info->max_vfs = pci_dev->max_vfs;
 
-	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
-	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->rx_offload_capa = 0;
+	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
 
 	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
-
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
-		.rx_thresh = {
-			.pthresh = ICE_DEFAULT_RX_PTHRESH,
-			.hthresh = ICE_DEFAULT_RX_HTHRESH,
-			.wthresh = ICE_DEFAULT_RX_WTHRESH,
-		},
-		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
-		.rx_drop_en = 0,
-		.offloads = 0,
-	};
-
-	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.tx_thresh = {
-			.pthresh = ICE_DEFAULT_TX_PTHRESH,
-			.hthresh = ICE_DEFAULT_TX_HTHRESH,
-			.wthresh = ICE_DEFAULT_TX_WTHRESH,
-		},
-		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
-		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
-		.offloads = 0,
-	};
-
-	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
-		.nb_max = ICE_MAX_RING_DESC,
-		.nb_min = ICE_MIN_RING_DESC,
-		.nb_align = ICE_ALIGN_RING_DESC,
-	};
-
-	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
-		.nb_max = ICE_MAX_RING_DESC,
-		.nb_min = ICE_MIN_RING_DESC,
-		.nb_align = ICE_ALIGN_RING_DESC,
-	};
 
 	dev_info->speed_capa = ETH_LINK_SPEED_10M |
 			       ETH_LINK_SPEED_100M |
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3cefa5b..94e45c8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -102,19 +102,6 @@
 		       ICE_FLAG_RSS_AQ_CAPABLE | \
 		       ICE_FLAG_VF_MAC_BY_PF)
 
-#define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
-
 struct ice_adapter;
 
 /**
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 9c5eee1..e2b7710 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -881,6 +881,72 @@
 }
 
 void
+ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct ice_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct ice_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+uint32_t
+ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define ICE_RXQ_SCAN_INTERVAL 4
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S) &
+	       (1 << ICE_RX_DESC_STATUS_DD_S)) {
+		/**
+		 * Check the DD bit of only one rx descriptor in each group
+		 * of 4, to avoid polling too frequently and degrading
+		 * performance too much.
+		 */
+		desc += ICE_RXQ_SCAN_INTERVAL;
+		rxdp += ICE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				 desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 088a206..4323c00 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,4 +134,9 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_rxq_info *qinfo);
+void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		      struct rte_eth_txq_info *qinfo);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
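
The three ops registered here back the generic
rte_eth_rx_queue_info_get(), rte_eth_tx_queue_info_get() and
rte_eth_rx_queue_count() wrappers. A minimal usage sketch follows,
assuming a configured port and RX queue; the function name is
illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative dump of the data the new rxq ops expose. */
static void
dump_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int used;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rxq %u: %u descriptors, drop_en=%u\n",
		       queue_id, qinfo.nb_desc, qinfo.conf.rx_drop_en);

	/* Descriptors the NIC has completed (DD bit set), counted in
	 * steps of ICE_RXQ_SCAN_INTERVAL by the PMD.
	 */
	used = rte_eth_rx_queue_count(port_id, queue_id);
	if (used >= 0)
		printf("rxq %u: %d descriptors in use\n", queue_id, used);
}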

* [dpdk-dev] [PATCH v6 20/31] net/ice: support packet type getting
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (18 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 19/31] net/ice: support queue information getting Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 21/31] net/ice: support basic RX/TX Wenzhuo Lu
                     ` (11 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add ops dev_supported_ptypes_get.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ice/ice_ethdev.c |   2 +
 drivers/net/ice/ice_rxtx.c   | 601 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |   2 +
 3 files changed, 605 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d997501..0b83bc6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -46,6 +46,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.tx_queue_setup               = ice_tx_queue_setup,
 	.tx_queue_release             = ice_tx_queue_release,
 	.dev_infos_get                = ice_dev_info_get,
+	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
@@ -681,6 +682,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 
 	dev->dev_ops = &ice_eth_dev_ops;
 
+	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
 	intr_handle = &pci_dev->intr_handle;
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e2b7710..47c9d5b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -946,6 +946,42 @@
 	return desc;
 }
 
+const uint32_t *
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		/* refers to ice_get_default_pkt_type() */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_LLDP,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_GRENAT,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L2_ETHER,
+		RTE_PTYPE_INNER_L2_ETHER_VLAN,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_FRAG,
+		RTE_PTYPE_INNER_L4_ICMP,
+		RTE_PTYPE_INNER_L4_NONFRAG,
+		RTE_PTYPE_INNER_L4_SCTP,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_TUNNEL_GTPC,
+		RTE_PTYPE_TUNNEL_GTPU,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
@@ -987,3 +1023,568 @@
 	}
 	dev->data->nb_tx_queues = 0;
 }
+
+/* The hardware datasheet describes what each value means in detail.
+ *
+ * @note: fix ice_dev_supported_ptypes_get() if anything changes here.
+ */
+static inline uint32_t
+ice_get_default_pkt_type(uint16_t ptype)
+{
+	static const uint32_t type_table[ICE_MAX_PKT_TYPE]
+		__rte_cache_aligned = {
+		/* L2 types */
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [5] reserved */
+		[6] = RTE_PTYPE_L2_ETHER_LLDP,
+		/* [7] - [10] reserved */
+		[11] = RTE_PTYPE_L2_ETHER_ARP,
+		/* [12] - [21] reserved */
+
+		/* Non tunneled IPv4 */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv4 --> IPv4 */
+		[29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [32] reserved */
+		[33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> IPv6 */
+		[36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [39] reserved */
+		[40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN */
+		[43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+		[44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [47] reserved */
+		[48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+		[51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [54] reserved */
+		[55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+		[58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [62] reserved */
+		[63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [69] reserved */
+		[70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [77] reserved */
+		[78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [84] reserved */
+		[85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_SCTP,
+		[87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_GRENAT |
+		       RTE_PTYPE_INNER_L2_ETHER_VLAN |
+		       RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_ICMP,
+
+		/* Non tunneled IPv6 */
+		[88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_FRAG,
+		[89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_NONFRAG,
+		[90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_UDP,
+		/* [91] reserved */
+		[92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_TCP,
+		[93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_SCTP,
+		[94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_L4_ICMP,
+
+		/* IPv6 --> IPv4 */
+		[95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_FRAG,
+		[96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_NONFRAG,
+		[97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_UDP,
+		/* [98] reserved */
+		[99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+		       RTE_PTYPE_TUNNEL_IP |
+		       RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+		       RTE_PTYPE_INNER_L4_TCP,
+		[100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> IPv6 */
+		[102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [105] reserved */
+		[106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_IP |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN */
+		[109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+		[110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [113] reserved */
+		[114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+		[117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [120] reserved */
+		[121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+		[124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+		[125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [128] reserved */
+		[129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+		[132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [135] reserved */
+		[136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+		[139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+		[140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [143] reserved */
+		[144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+
+		/* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+		[147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_FRAG,
+		[148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_NONFRAG,
+		[149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_UDP,
+		/* [150] reserved */
+		[151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_TCP,
+		[152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_SCTP,
+		[153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GRENAT |
+			RTE_PTYPE_INNER_L2_ETHER_VLAN |
+			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_INNER_L4_ICMP,
+		/* [154] - [255] reserved */
+		[256] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[257] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[258] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[259] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		/* [260] - [263] reserved */
+		[264] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[265] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+			RTE_PTYPE_TUNNEL_GTPC,
+		[266] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+		[267] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+				RTE_PTYPE_TUNNEL_GTPU,
+
+		/* All others reserved */
+	};
+
+	return type_table[ptype];
+}
+
+void __attribute__((cold))
+ice_set_default_ptype_table(struct rte_eth_dev *dev)
+{
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < ICE_MAX_PKT_TYPE; i++)
+		ad->ptype_tbl[i] = ice_get_default_pkt_type(i);
+}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 4323c00..fd1c4ef 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -139,4 +139,6 @@ void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_txq_info *qinfo);
+void ice_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
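
The static table returned by ice_dev_supported_ptypes_get() is consumed
through rte_eth_dev_get_supported_ptypes(), which filters it by a
caller-supplied mask. A minimal sketch follows, assuming a valid
port_id; rte_get_ptype_l4_name() is the standard mbuf ptype helper, and
the function name is illustrative:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Illustrative listing of the L4 packet types the port can parse. */
static void
print_l4_ptypes(uint16_t port_id)
{
	uint32_t ptypes[32];
	int i, num;

	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
					       ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
		printf("L4 ptype: %s\n", rte_get_ptype_l4_name(ptypes[i]));
}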

* [dpdk-dev] [PATCH v6 21/31] net/ice: support basic RX/TX
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (19 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 20/31] net/ice: support packet type getting Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 22/31] net/ice: support MTU setting Wenzhuo Lu
                     ` (10 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add basic RX & TX support.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   5 +
 drivers/net/ice/ice_ethdev.c     |  55 ++++-
 drivers/net/ice/ice_rxtx.c       | 495 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.h       |   8 +
 4 files changed, 559 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index eb852ff..a42cc20 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,11 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+TSO                  = Y
+CRC offload          = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 0b83bc6..eed0c30 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -681,6 +681,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
+	dev->rx_pkt_burst = ice_recv_pkts;
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
 
 	ice_set_default_ptype_table(dev);
 	pci_dev = RTE_DEV_TO_PCI(dev->device);
@@ -962,6 +965,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		goto rx_err;
 	}
 
+	ice_set_rx_function(dev);
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -1029,14 +1034,60 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->max_mac_addrs = vsi->max_macaddrs;
 	dev_info->max_vfs = pci_dev->max_vfs;
 
-	dev_info->rx_offload_capa = 0;
-	dev_info->tx_offload_capa = 0;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_KEEP_CRC;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
 
 	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
 	dev_info->speed_capa = ETH_LINK_SPEED_10M |
 			       ETH_LINK_SPEED_100M |
 			       ETH_LINK_SPEED_1G |
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 47c9d5b..4ea414e 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -946,8 +946,36 @@
 	return desc;
 }
 
+/* Rx L3/L4 checksum */
+static inline uint64_t
+ice_rxd_error_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags = 0;
+	uint64_t error_bits = (qword >> ICE_RXD_QW1_ERROR_S);
+
+	if (likely((error_bits & ICE_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_IPE_S)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_L4E_S)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << ICE_RX_DESC_ERROR_EIPE_S)))
+		flags |= PKT_RX_EIP_CKSUM_BAD;
+
+	return flags;
+}
+
 const uint32_t *
-ice_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -979,7 +1007,9 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	return ptypes;
+	if (dev->rx_pkt_burst == ice_recv_pkts)
+		return ptypes;
+	return NULL;
 }
 
 void
@@ -1024,6 +1054,467 @@
 	dev->data->nb_tx_queues = 0;
 }
 
+uint16_t
+ice_recv_pkts(void *rx_queue,
+	      struct rte_mbuf **rx_pkts,
+	      uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *nmb; /* newly allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/**
+		 * fill the read format of the descriptor with the physical
+		 * address of the newly allocated mbuf: nmb
+		 */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		/* calculate rx_packet_len of the received pkt */
+		rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+
+		/* fill old mbuf with received descriptor: rxd */
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		rxm->ol_flags |= pkt_flags;
+		/* copy old mbuf to rx_pkts */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the queue's receive tail register.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
+static inline void
+ice_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union ice_tx_offload tx_offload)
+{
+	/* L2 length must be set. */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      ICE_TX_DESC_LEN_MACLEN_S;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      ICE_TX_DESC_LEN_IPLEN_S;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      ICE_TX_DESC_LEN_L4_LEN_S;
+		break;
+	default:
+		break;
+	}
+}
+
+static inline int
+ice_xmit_cleanup(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ice_tx_desc *txd = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d) value=0x%"PRIx64"\n",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id,
+				txd[desc_to_clean_to].cmd_type_offset_bsz);
+		/* Failed to clean any descriptors */
+		return -1;
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+					    desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					    last_desc_cleaned);
+
+	/* The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+ice_calc_context_desc(uint64_t flags)
+{
+	static const uint64_t mask = PKT_TX_TCP_SEG | PKT_TX_QINQ;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+/* set ice TSO context descriptor */
+static inline uint64_t
+ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/**
+	 * for a non-tunneled packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = ICE_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << ICE_TXD_CTX_QW1_CMD_S) |
+		    ((uint64_t)cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+		    ((uint64_t)mbuf->tso_segsz << ICE_TXD_CTX_QW1_MSS_S);
+
+	return ctx_desc;
+}
+
+uint16_t
+ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq;
+	volatile struct ice_tx_desc *tx_ring;
+	volatile struct ice_tx_desc *txd;
+	struct ice_tx_entry *sw_ring;
+	struct ice_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint32_t td_cmd = 0;
+	uint32_t td_offset = 0;
+	uint32_t td_tag = 0;
+	uint16_t tx_last;
+	uint64_t buf_dma_addr;
+	uint64_t ol_flags;
+	union ice_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	tx_ring = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = *tx_pkts++;
+
+		td_cmd = 0;
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = ice_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that
+		 * packet plus one context descriptor, if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		if (nb_used > txq->nb_tx_free) {
+			if (ice_xmit_cleanup(txq) != 0) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->tx_rs_thresh)) {
+				while (nb_used > txq->nb_tx_free) {
+					if (ice_xmit_cleanup(txq) != 0) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Enable checksum offloading */
+		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
+			ice_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+		}
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					ice_set_tso_ctx(tx_pkt, tx_offload);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+		m_seg = tx_pkt;
+
+		do {
+			txd = &tx_ring[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz =
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)m_seg->data_len  <<
+				 ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag  << ICE_TXD_QW1_L2TAG1_S));
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		td_cmd |= ICE_TX_DESC_CMD_EOP;
+		txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		/* set RS bit on the last descriptor of one packet */
+		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= ICE_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_tx_used = 0;
+		}
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 ICE_TXD_QW1_CMD_S);
+	}
+end_of_tx:
+	rte_wmb();
+
+	/* update Tail register */
+	ICE_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+void __attribute__((cold))
+ice_set_rx_function(struct rte_eth_dev *dev)
+{
+	dev->rx_pkt_burst = ice_recv_pkts;
+}
+
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
+/* The default values of TSO MSS */
+#define ICE_MIN_TSO_MSS            64
+#define ICE_MAX_TSO_MSS            9728
+#define ICE_MAX_TSO_FRAME_SIZE     262144
+uint16_t
+ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_TCP_SEG &&
+		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
+		     m->tso_segsz > ICE_MAX_TSO_MSS ||
+		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
+			/**
+			 * An MSS outside this range is considered malicious.
+			 */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+	return i;
+}
+
+void __attribute__((cold))
+ice_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = ice_xmit_pkts;
+	dev->tx_pkt_prepare = ice_prep_pkts;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index fd1c4ef..228d2ff 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -134,6 +134,14 @@ int ice_tx_queue_setup(struct rte_eth_dev *dev,
 void ice_tx_queue_release(void *txq);
 void ice_clear_queues(struct rte_eth_dev *dev);
 void ice_free_queues(struct rte_eth_dev *dev);
+uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_rx_function(struct rte_eth_dev *dev);
+uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void ice_set_tx_function(struct rte_eth_dev *dev);
 uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 22/31] net/ice: support MTU setting
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (20 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 21/31] net/ice: support basic RX/TX Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 23/31] net/ice: support MAC ops Wenzhuo Lu
                     ` (9 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add ops mtu_set.
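
For reference, the op is driven through rte_eth_dev_set_mtu(); a
minimal sketch (hypothetical caller, not part of this patch; port 0
assumed stopped, since the op rejects a running port):

    #include <rte_ethdev.h>

    /* Set a jumbo MTU on stopped port 0. The PMD derives the max frame
     * size from the MTU (plus Ethernet/CRC/VLAN overhead) and toggles
     * DEV_RX_OFFLOAD_JUMBO_FRAME accordingly.
     */
    static int
    set_jumbo_mtu(void)
    {
    	return rte_eth_dev_set_mtu(0, 9000); /* -EBUSY if port started */
    }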

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index a42cc20..7a87de3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,8 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
 TSO                  = Y
 CRC offload          = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index eed0c30..f7c0b36 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -23,6 +23,7 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
+static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -48,6 +49,7 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_infos_get                = ice_dev_info_get,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
+	.mtu_set                      = ice_mtu_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
 	.rx_queue_count               = ice_rx_queue_count,
@@ -1039,6 +1041,7 @@ static int ice_init_rss(struct ice_pf *pf)
 		DEV_RX_OFFLOAD_UDP_CKSUM |
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_KEEP_CRC;
 	dev_info->tx_offload_capa =
 		DEV_TX_OFFLOAD_IPV4_CKSUM |
@@ -1226,6 +1229,38 @@ static int ice_init_rss(struct ice_pf *pf)
 }
 
 static int
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = pf->dev_data;
+	uint32_t frame_size = mtu + ETHER_HDR_LEN
+			      + ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE;
+
+	/* check if mtu is within the allowed range */
+	if (mtu < ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden while the port is running */
+	if (dev_data->dev_started) {
+		PMD_DRV_LOG(ERR,
+			    "port %d must be stopped before configuration",
+			    dev_data->port_id);
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev_data->dev_conf.rxmode.offloads |=
+			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev_data->dev_conf.rxmode.offloads &=
+			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 23/31] net/ice: support MAC ops
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (21 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 22/31] net/ice: support MTU setting Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 24/31] net/ice: support VLAN ops Wenzhuo Lu
                     ` (8 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
mac_addr_set
mac_addr_add
mac_addr_remove
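
A hedged usage sketch (hypothetical caller, not part of this patch;
port 0 assumed initialized, and the address below is a made-up locally
administered one):

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static void
    mac_ops_example(void)
    {
    	struct ether_addr addr = {
    		.addr_bytes = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} };

    	/* each call lands in the corresponding new ice op */
    	rte_eth_dev_default_mac_addr_set(0, &addr); /* mac_addr_set */
    	rte_eth_dev_mac_addr_add(0, &addr, 0);      /* mac_addr_add */
    	rte_eth_dev_mac_addr_remove(0, &addr);      /* mac_addr_remove */
    }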

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 235 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 237 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 7a87de3..ff4749f 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -11,6 +11,8 @@ Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
 TSO                  = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 CRC offload          = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index f7c0b36..a124c9c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr);
+static int ice_macaddr_add(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr,
+			   __rte_unused uint32_t index,
+			   uint32_t pool);
+static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -50,6 +57,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get     = ice_dev_supported_ptypes_get,
 	.link_update                  = ice_link_update,
 	.mtu_set                      = ice_mtu_set,
+	.mac_addr_set                 = ice_macaddr_set,
+	.mac_addr_add                 = ice_macaddr_add,
+	.mac_addr_remove              = ice_macaddr_remove,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
 	.rx_queue_count               = ice_rx_queue_count,
@@ -339,6 +349,130 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/* Find a specific MAC filter */
+static struct ice_mac_filter *
+ice_find_mac_filter(struct ice_vsi *vsi, struct ether_addr *macaddr)
+{
+	struct ice_mac_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr))
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* If it's added and configured, return */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This MAC filter already exists.");
+		return 0;
+	}
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* Add the mac */
+	ret = ice_add_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+	/* Add the mac addr into mac list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	rte_memcpy(&f->mac_info.mac_addr, mac_addr, ETH_ADDR_LEN);
+	TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
+	vsi->mac_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_mac_filter(struct ice_vsi *vsi, struct ether_addr *mac_addr)
+{
+	struct ice_fltr_list_entry *m_list_itr = NULL;
+	struct ice_mac_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/* Can't find it, return an error */
+	f = ice_find_mac_filter(vsi, mac_addr);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	m_list_itr = (struct ice_fltr_list_entry *)
+		ice_malloc(hw, sizeof(*m_list_itr));
+	if (!m_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	ice_memcpy(m_list_itr->fltr_info.l_data.mac.mac_addr,
+		   mac_addr, ETH_ALEN, ICE_NONDMA_TO_NONDMA);
+	m_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	m_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	m_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_MAC;
+	m_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	m_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&m_list_itr->list_entry, &list_head);
+
+	/* remove the mac filter */
+	ret = ice_remove_mac(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the mac addr from mac list */
+	TAILQ_REMOVE(&vsi->mac_list, f, next);
+	rte_free(f);
+	vsi->mac_num--;
+
+	ret = 0;
+DONE:
+	rte_free(m_list_itr);
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -547,6 +681,9 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	struct ice_vsi *vsi = NULL;
 	struct ice_vsi_ctx vsi_ctx;
 	int ret;
+	struct ether_addr broadcast = {
+		.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff} };
+	struct ether_addr mac_addr;
 	uint16_t max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
 	uint8_t tc_bitmap = 0x1;
 
@@ -632,6 +769,21 @@ static int ice_link_update(struct rte_eth_dev *dev,
 	pf->vsis_allocated = vsi_ctx.vsis_allocd;
 	pf->vsis_unallocated = vsi_ctx.vsis_unallocated;
 
+	/* MAC configuration */
+	rte_memcpy(pf->dev_addr.addr_bytes,
+		   hw->port_info->mac.perm_addr,
+		   ETH_ADDR_LEN);
+
+	rte_memcpy(&mac_addr, &pf->dev_addr, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add dflt MAC filter");
+
+	rte_memcpy(&mac_addr, &broadcast, ETHER_ADDR_LEN);
+	ret = ice_add_mac_filter(vsi, &mac_addr);
+	if (ret != ICE_SUCCESS)
+		PMD_INIT_LOG(ERR, "Failed to add MAC filter");
+
 	/* At the beginning, only TC0. */
 	/* What we need here is the maximum number of TX queues.
 	 * Currently vsi->nb_qps provides it.
@@ -1260,6 +1412,89 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static int ice_macaddr_set(struct rte_eth_dev *dev,
+			   struct ether_addr *mac_addr)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_mac_filter *f;
+	uint8_t flags = 0;
+	int ret;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(f, &vsi->mac_list, next) {
+		if (is_same_ether_addr(&pf->dev_addr, &f->mac_info.mac_addr))
+			break;
+	}
+
+	if (!f) {
+		PMD_DRV_LOG(ERR, "Failed to find filter for default mac");
+		return -EIO;
+	}
+
+	ret = ice_remove_mac_filter(vsi, &f->mac_info.mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to delete mac filter");
+		return -EIO;
+	}
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add mac filter");
+		return -EIO;
+	}
+	memcpy(&pf->dev_addr, mac_addr, ETH_ADDR_LEN);
+
+	flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
+	ret = ice_aq_manage_mac_write(hw, mac_addr->addr_bytes, flags, NULL);
+	if (ret != ICE_SUCCESS)
+		PMD_DRV_LOG(ERR, "Failed to set manage mac");
+
+	return 0;
+}
+
+/* Add a MAC address, and update filters */
+static int
+ice_macaddr_add(struct rte_eth_dev *dev,
+		struct ether_addr *mac_addr,
+		__rte_unused uint32_t index,
+		__rte_unused uint32_t pool)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	ret = ice_add_mac_filter(vsi, mac_addr);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MAC filter");
+		return -EINVAL;
+	}
+
+	return ICE_SUCCESS;
+}
+
+/* Remove a MAC address, and update filters */
+static void
+ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+
+	macaddr = &data->mac_addrs[index];
+	ret = ice_remove_mac_filter(vsi, macaddr);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to remove MAC filter");
+		return;
+	}
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 24/31] net/ice: support VLAN ops
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (22 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 23/31] net/ice: support MAC ops Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 25/31] net/ice: support RSS Wenzhuo Lu
                     ` (7 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
ice_vlan_filter_set
ice_vlan_offload_set
ice_vlan_tpid_set
ice_vlan_pvid_set
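
A hedged usage sketch (hypothetical caller, not part of this patch;
port 0 assumed initialized, VLAN/TPID values purely illustrative):

    #include <rte_ethdev.h>

    static void
    vlan_ops_example(void)
    {
    	/* admit VLAN 10 -> ice_vlan_filter_set */
    	rte_eth_dev_vlan_filter(0, 10, 1);
    	/* enable stripping and filtering -> ice_vlan_offload_set */
    	rte_eth_dev_set_vlan_offload(0, ETH_VLAN_STRIP_OFFLOAD |
    					ETH_VLAN_FILTER_OFFLOAD);
    	/* outer TPID 0x88a8 -> ice_vlan_tpid_set */
    	rte_eth_dev_set_vlan_ether_type(0, ETH_VLAN_TYPE_OUTER, 0x88a8);
    	/* port VLAN 10 -> ice_vlan_pvid_set */
    	rte_eth_dev_set_vlan_pvid(0, 10, 1);
    }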

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 doc/guides/nics/ice.rst          |  16 ++
 drivers/net/ice/ice_ethdev.c     | 598 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_rxtx.c       |  54 ++++
 4 files changed, 670 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index ff4749f..b76cae3 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -13,7 +13,10 @@ Jumbo frame          = Y
 TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+VLAN filter          = Y
 CRC offload          = Y
+VLAN offload         = Y
+QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 96a594f..466af55 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -64,6 +64,22 @@ Driver compilation and testing
 Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
 for details.
 
+Sample Application Notes
+------------------------
+
+VLAN filter
+~~~~~~~~~~~
+
+The VLAN filter only works when promiscuous mode is off.
+
+To start ``testpmd`` and add VLAN 10 to port 0:
+
+.. code-block:: console
+
+    ./app/testpmd -l 0-15 -n 4 -- -i
+    ...
+
+    testpmd> rx_vlan add 10 0
 
 Limitations or Known issues
 ---------------------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a124c9c..67ab06c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -24,6 +24,13 @@ static void ice_dev_info_get(struct rte_eth_dev *dev,
 static int ice_link_update(struct rte_eth_dev *dev,
 			   int wait_to_complete);
 static int ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
+static int ice_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
+			     enum rte_vlan_type vlan_type,
+			     uint16_t tpid);
+static int ice_vlan_filter_set(struct rte_eth_dev *dev,
+			       uint16_t vlan_id,
+			       int on);
 static int ice_macaddr_set(struct rte_eth_dev *dev,
 			   struct ether_addr *mac_addr);
 static int ice_macaddr_add(struct rte_eth_dev *dev,
@@ -31,6 +38,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
+			     uint16_t pvid, int on);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -60,6 +69,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	.mac_addr_set                 = ice_macaddr_set,
 	.mac_addr_add                 = ice_macaddr_add,
 	.mac_addr_remove              = ice_macaddr_remove,
+	.vlan_filter_set              = ice_vlan_filter_set,
+	.vlan_offload_set             = ice_vlan_offload_set,
+	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
 	.rx_queue_count               = ice_rx_queue_count,
@@ -473,6 +486,297 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Find a specific VLAN filter */
+static struct ice_vlan_filter *
+ice_find_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_vlan_filter *f;
+
+	TAILQ_FOREACH(f, &vsi->vlan_list, next) {
+		if (vlan_id == f->vlan_info.vlan_id)
+			return f;
+	}
+
+	return NULL;
+}
+
+static int
+ice_add_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!vsi || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* If it's added and configured, return. */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (f) {
+		PMD_DRV_LOG(INFO, "This VLAN filter already exists.");
+		return 0;
+	}
+
+	if (!vsi->vlan_anti_spoof_on && !vsi->vlan_filter_on)
+		return 0;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* Add the vlan */
+	ret = ice_add_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Add vlan into vlan list */
+	f = rte_zmalloc(NULL, sizeof(*f), 0);
+	if (!f) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = -ENOMEM;
+		goto DONE;
+	}
+	f->vlan_info.vlan_id = vlan_id;
+	TAILQ_INSERT_TAIL(&vsi->vlan_list, f, next);
+	vsi->vlan_num++;
+
+	ret = 0;
+
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_vlan_filter(struct ice_vsi *vsi, uint16_t vlan_id)
+{
+	struct ice_fltr_list_entry *v_list_itr = NULL;
+	struct ice_vlan_filter *f;
+	struct LIST_HEAD_TYPE list_head;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	/**
+	 * VLAN 0 is the generic filter for untagged packets
+	 * and can't be removed.
+	 */
+	if (!vsi || vlan_id == 0 || vlan_id > ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	/* Can't find it, return an error */
+	f = ice_find_vlan_filter(vsi, vlan_id);
+	if (!f)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&list_head);
+
+	v_list_itr = (struct ice_fltr_list_entry *)
+		      ice_malloc(hw, sizeof(*v_list_itr));
+	if (!v_list_itr) {
+		ret = -ENOMEM;
+		goto DONE;
+	}
+
+	v_list_itr->fltr_info.l_data.vlan.vlan_id = vlan_id;
+	v_list_itr->fltr_info.src_id = ICE_SRC_ID_VSI;
+	v_list_itr->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+	v_list_itr->fltr_info.lkup_type = ICE_SW_LKUP_VLAN;
+	v_list_itr->fltr_info.flag = ICE_FLTR_TX;
+	v_list_itr->fltr_info.vsi_handle = vsi->idx;
+
+	LIST_ADD(&v_list_itr->list_entry, &list_head);
+
+	/* remove the vlan filter */
+	ret = ice_remove_vlan(hw, &list_head);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to remove VLAN filter");
+		ret = -EINVAL;
+		goto DONE;
+	}
+
+	/* Remove the vlan id from vlan list */
+	TAILQ_REMOVE(&vsi->vlan_list, f, next);
+	rte_free(f);
+	vsi->vlan_num--;
+
+	ret = 0;
+DONE:
+	rte_free(v_list_itr);
+	return ret;
+}
+
+static int
+ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
+{
+	struct ice_mac_filter *m_f;
+	struct ice_vlan_filter *v_f;
+	int ret = 0;
+
+	if (!vsi || !vsi->mac_num)
+		return -EINVAL;
+
+	TAILQ_FOREACH(m_f, &vsi->mac_list, next) {
+		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+	if (vsi->vlan_num == 0)
+		return 0;
+
+	TAILQ_FOREACH(v_f, &vsi->vlan_list, next) {
+		ret = ice_remove_vlan_filter(vsi, v_f->vlan_info.vlan_id);
+		if (ret != ICE_SUCCESS) {
+			ret = -EINVAL;
+			goto DONE;
+		}
+	}
+
+DONE:
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_insertion(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST) ==
+			    ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST)
+				return 0; /* already on */
+		} else {
+			if (!(vsi->info.outer_tag_flags &
+			      ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST))
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST;
+	else
+		qinq_flags = 0;
+	/* clear global insertion and use per packet insertion */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_INSERT);
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_qinq_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t qinq_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID)) {
+		if (on) {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_COPY)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.outer_tag_flags &
+			     ICE_AQ_VSI_OUTER_TAG_MODE_M) ==
+			    ICE_AQ_VSI_OUTER_TAG_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_COPY;
+	else
+		qinq_flags = ICE_AQ_VSI_OUTER_TAG_NOTHING;
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_MODE_M);
+	vsi->info.outer_tag_flags |= qinq_flags;
+	/* use default vlan type 0x8100 */
+	vsi->info.outer_tag_flags &= ~(ICE_AQ_VSI_OUTER_TAG_TYPE_M);
+	vsi->info.outer_tag_flags |= ICE_DFLT_OUTER_TAG_TYPE <<
+				     ICE_AQ_VSI_OUTER_TAG_TYPE_S;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO,
+			    "Update VSI failed to %s qinq stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID);
+
+	return ret;
+}
+
+static int
+ice_vsi_config_double_vlan(struct ice_vsi *vsi, int on)
+{
+	int ret;
+
+	ret = ice_vsi_config_qinq_stripping(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq stripping - %d", ret);
+
+	ret = ice_vsi_config_qinq_insertion(vsi, on);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to set qinq insertion - %d", ret);
+
+	return ret;
+}
+
 /* Enable IRQ0 */
 static void
 ice_pf_enable_irq0(struct ice_hw *hw)
@@ -832,6 +1136,7 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 	struct rte_intr_handle *intr_handle;
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi;
 	int ret;
 
 	dev->dev_ops = &ice_eth_dev_ops;
@@ -887,6 +1192,11 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 		goto err_pf_setup;
 	}
 
+	vsi = pf->main_vsi;
+
+	/* Disable double vlan by default */
+	ice_vsi_config_double_vlan(vsi, FALSE);
+
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -922,6 +1232,8 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 
 	hw = ICE_VSI_TO_HW(vsi);
 
+	ice_remove_all_mac_vlan_filters(vsi);
+
 	memset(&vsi_ctx, 0, sizeof(vsi_ctx));
 
 	vsi_ctx.vsi_num = vsi->vsi_id;
@@ -1189,13 +1501,19 @@ static int ice_init_rss(struct ice_pf *pf)
 	dev_info->max_vfs = pci_dev->max_vfs;
 
 	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
 		DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM |
 		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_VLAN_EXTEND |
 		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		DEV_RX_OFFLOAD_KEEP_CRC |
+		DEV_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
 		DEV_TX_OFFLOAD_IPV4_CKSUM |
 		DEV_TX_OFFLOAD_UDP_CKSUM |
 		DEV_TX_OFFLOAD_TCP_CKSUM |
@@ -1496,6 +1814,284 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (on) {
+		ret = ice_add_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to add vlan filter");
+			return -EINVAL;
+		}
+	} else {
+		ret = ice_remove_vlan_filter(vsi, vlan_id);
+		if (ret < 0) {
+			PMD_DRV_LOG(ERR, "Failed to remove vlan filter");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/* Configure vlan filter on or off */
+static int
+ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t sec_flags, sw_flags2;
+	int ret = 0;
+
+	sec_flags = ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA <<
+		    ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S;
+	sw_flags2 = ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
+
+	if (on) {
+		vsi->info.sec_flags |= sec_flags;
+		vsi->info.sw_flags2 |= sw_flags2;
+	} else {
+		vsi->info.sec_flags &= ~sec_flags;
+		vsi->info.sw_flags2 &= ~sw_flags2;
+	}
+	vsi->info.sw_id = hw->port_info->sw_id;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+				 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan rx pruning",
+			    on ? "enable" : "disable");
+		ret = -EINVAL;
+	} else {
+		vsi->info.valid_sections |=
+			rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
+					 ICE_AQ_VSI_PROP_SECURITY_VALID);
+	}
+
+	return ret;
+}
+
+static int
+ice_vsi_config_vlan_stripping(struct ice_vsi *vsi, bool on)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags;
+	int ret = 0;
+
+	/* Check if it has been already on or off */
+	if (vsi->info.valid_sections &
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID)) {
+		if (on) {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_STR_BOTH)
+				return 0; /* already on */
+		} else {
+			if ((vsi->info.vlan_flags &
+			     ICE_AQ_VSI_VLAN_EMOD_M) ==
+			    ICE_AQ_VSI_VLAN_EMOD_NOTHING)
+				return 0; /* already off */
+		}
+	}
+
+	if (on)
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+	else
+		vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_VLAN_EMOD_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	(void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret) {
+		PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
+			    on ? "enable" : "disable");
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_rxmode *rxmode;
+
+	rxmode = &dev->data->dev_conf.rxmode;
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			ice_vsi_config_vlan_filter(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_filter(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			ice_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			ice_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			ice_vsi_config_double_vlan(vsi, TRUE);
+		else
+			ice_vsi_config_double_vlan(vsi, FALSE);
+	}
+
+	return 0;
+}
+
+static int
+ice_vlan_tpid_set(struct rte_eth_dev *dev,
+		  enum rte_vlan_type vlan_type,
+		  uint16_t tpid)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint64_t reg_r = 0, reg_w = 0;
+	uint16_t reg_id = 0;
+	int ret = 0;
+	int qinq = dev->data->dev_conf.rxmode.offloads &
+		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_OUTER:
+		if (qinq)
+			reg_id = 3;
+		else
+			reg_id = 5;
+		break;
+	case ETH_VLAN_TYPE_INNER:
+		if (qinq) {
+			reg_id = 5;
+		} else {
+			PMD_DRV_LOG(ERR,
+				    "Unsupported vlan type in single vlan.");
+			return -EINVAL;
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+		return -EINVAL;
+	}
+	reg_r = ICE_READ_REG(hw, GL_SWT_L2TAGCTRL(reg_id));
+	PMD_DRV_LOG(DEBUG, "Debug read from ICE GL_SWT_L2TAGCTRL[%d]: "
+		    "0x%08"PRIx64"", reg_id, reg_r);
+
+	reg_w = reg_r & (~(GL_SWT_L2TAGCTRL_ETHERTYPE_M));
+	reg_w |= ((uint64_t)tpid << GL_SWT_L2TAGCTRL_ETHERTYPE_S);
+	if (reg_r == reg_w) {
+		PMD_DRV_LOG(DEBUG, "No need to write");
+		return 0;
+	}
+
+	ICE_WRITE_REG(hw, GL_SWT_L2TAGCTRL(reg_id), reg_w);
+	PMD_DRV_LOG(DEBUG, "Debug write 0x%08"PRIx64" to "
+		    "ICE GL_SWT_L2TAGCTRL[%d]", reg_w, reg_id);
+
+	return ret;
+}
+
+static int
+ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
+{
+	struct ice_hw *hw;
+	struct ice_vsi_ctx ctxt;
+	uint8_t vlan_flags = 0;
+	int ret;
+
+	if (!vsi || !info) {
+		PMD_DRV_LOG(ERR, "invalid parameters");
+		return -EINVAL;
+	}
+
+	if (info->on) {
+		vsi->info.pvid = info->config.pvid;
+		/**
+		 * If PVID insertion is enabled, only tagged packets are
+		 * allowed to be sent out.
+		 */
+		vlan_flags = ICE_AQ_VSI_PVLAN_INSERT_PVID |
+			     ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	} else {
+		vsi->info.pvid = 0;
+		if (info->config.reject.tagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_TAGGED;
+
+		if (info->config.reject.untagged == 0)
+			vlan_flags |= ICE_AQ_VSI_VLAN_MODE_UNTAGGED;
+	}
+	vsi->info.vlan_flags &= ~(ICE_AQ_VSI_PVLAN_INSERT_PVID |
+				  ICE_AQ_VSI_VLAN_MODE_M);
+	vsi->info.vlan_flags |= vlan_flags;
+	memset(&ctxt, 0, sizeof(ctxt));
+	rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+	ctxt.info.valid_sections =
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+	ctxt.vsi_num = vsi->vsi_id;
+
+	hw = ICE_VSI_TO_HW(vsi);
+	ret = ice_update_vsi(hw, vsi->idx, &ctxt, NULL);
+	if (ret != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR,
+			    "update VSI for VLAN insert failed, err %d",
+			    ret);
+		return -EINVAL;
+	}
+
+	vsi->info.valid_sections |=
+		rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
+
+	return ret;
+}
+
+static int
+ice_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = pf->dev_data;
+	struct ice_vsi_vlan_pvid_info info;
+	int ret;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on) {
+		info.config.pvid = pvid;
+	} else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	ret = ice_vsi_vlan_pvid_set(vsi, &info);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "Failed to set pvid.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 4ea414e..67bbd08 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -974,6 +974,38 @@
 	return flags;
 }
 
+static inline void
+ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+	    (1 << ICE_RX_DESC_STATUS_L2TAG1P_S)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+	} else {
+		mb->vlan_tci = 0;
+	}
+
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+	    (1 << ICE_RX_DESC_EXT_STATUS_L2TAG2P_S)) {
+		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->vlan_tci_outer = mb->vlan_tci;
+		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+			   rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+	} else {
+		mb->vlan_tci_outer = 0;
+	}
+#endif
+	PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+		   mb->vlan_tci, mb->vlan_tci_outer);
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -1124,6 +1156,7 @@
 		rxm->pkt_len = rx_packet_len;
 		rxm->data_len = rx_packet_len;
 		rxm->port = rxq->port_id;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
 		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
 							ICE_RXD_QW1_PTYPE_M) >>
 						       ICE_RXD_QW1_PTYPE_S)];
@@ -1371,6 +1404,12 @@
 			}
 		}
 
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
 		/* Enable checksum offloading */
 		if (ol_flags & ICE_TX_CKSUM_OFFLOAD_MASK) {
 			ice_txd_enable_checksum(ol_flags, &td_cmd,
@@ -1379,6 +1418,10 @@
 
 		if (nb_ctx) {
 			/* Setup TX context descriptor if required */
+			volatile struct ice_tx_ctx_desc *ctx_txd =
+				(volatile struct ice_tx_ctx_desc *)
+					&tx_ring[tx_id];
+			uint16_t cd_l2tag2 = 0;
 			uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX;
 
 			txn = &sw_ring[txe->next_id];
@@ -1392,6 +1435,17 @@
 				cd_type_cmd_tso_mss |=
 					ice_set_tso_ctx(tx_pkt, tx_offload);
 
+			/* TX context descriptor based double VLAN insert */
+			if (ol_flags & PKT_TX_QINQ) {
+				cd_l2tag2 = tx_pkt->vlan_tci_outer;
+				cd_type_cmd_tso_mss |=
+					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
+					 ICE_TXD_CTX_QW1_CMD_S);
+			}
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
+			ctx_txd->qw1 =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
 			txe->last_id = tx_last;
 			tx_id = txe->next_id;
 			txe = txn;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 25/31] net/ice: support RSS
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (23 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 24/31] net/ice: support VLAN ops Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 26/31] net/ice: support RX queue interruption Wenzhuo Lu
                     ` (6 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add below ops,
reta_update
reta_query
rss_hash_update
rss_hash_conf_get
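
A hedged usage sketch of the RETA ops (hypothetical caller, not part
of this patch; port 0 assumed initialized and reporting the 512-entry
table this VSI uses by default):

    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Alternate all 512 RETA entries between RX queues 0 and 1. */
    static int
    reta_example(void)
    {
    	struct rte_eth_rss_reta_entry64 conf[ETH_RSS_RETA_SIZE_512 /
    					     RTE_RETA_GROUP_SIZE];
    	uint16_t i;

    	memset(conf, 0, sizeof(conf));
    	for (i = 0; i < ETH_RSS_RETA_SIZE_512; i++) {
    		conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
    		conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
    			i % 2;
    	}
    	return rte_eth_dev_rss_reta_update(0, conf, ETH_RSS_RETA_SIZE_512);
    }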

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   3 +
 drivers/net/ice/ice_ethdev.c     | 243 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h     |  13 +++
 drivers/net/ice/ice_rxtx.c       |  20 ++++
 4 files changed, 279 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index b76cae3..b492ccd 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -13,6 +13,9 @@ Jumbo frame          = Y
 TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 67ab06c..a5e8da6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -28,6 +28,16 @@ static int ice_link_update(struct rte_eth_dev *dev,
 static int ice_vlan_tpid_set(struct rte_eth_dev *dev,
 			     enum rte_vlan_type vlan_type,
 			     uint16_t tpid);
+static int ice_rss_reta_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size);
+static int ice_rss_reta_query(struct rte_eth_dev *dev,
+			      struct rte_eth_rss_reta_entry64 *reta_conf,
+			      uint16_t reta_size);
+static int ice_rss_hash_update(struct rte_eth_dev *dev,
+			       struct rte_eth_rss_conf *rss_conf);
+static int ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+				 struct rte_eth_rss_conf *rss_conf);
 static int ice_vlan_filter_set(struct rte_eth_dev *dev,
 			       uint16_t vlan_id,
 			       int on);
@@ -72,6 +82,10 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_filter_set              = ice_vlan_filter_set,
 	.vlan_offload_set             = ice_vlan_offload_set,
 	.vlan_tpid_set                = ice_vlan_tpid_set,
+	.reta_update                  = ice_rss_reta_update,
+	.reta_query                   = ice_rss_reta_query,
+	.rss_hash_update              = ice_rss_hash_update,
+	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
@@ -1526,6 +1540,7 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	dev_info->reta_size = hw->func_caps.common_cap.rss_table_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -2010,6 +2025,234 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_get_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			lut_dw[i] = ICE_READ_REG(hw, PFQF_HLUT(i));
+	}
+
+	return 0;
+}
+
+static int
+ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
+{
+	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!vsi || !lut)
+		return -EINVAL;
+
+	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
+		ret = ice_aq_set_rss_lut(hw, vsi->idx, TRUE,
+					 lut, lut_size);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
+			return -EINVAL;
+		}
+	} else {
+		uint64_t *lut_dw = (uint64_t *)lut;
+		uint16_t i, lut_size_dw = lut_size / 4;
+
+		for (i = 0; i < lut_size_dw; i++)
+			ICE_WRITE_REG(hw, PFQF_HLUT(i), lut_dw[i]);
+
+		ice_flush(hw);
+	}
+
+	return 0;
+}
+
+static int
+ice_rss_reta_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+	ret = ice_set_rss_lut(pf->main_vsi, lut, reta_size);
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_rss_reta_query(struct rte_eth_dev *dev,
+		   struct rte_eth_rss_reta_entry64 *reta_conf,
+		   uint16_t reta_size)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t i, lut_size = hw->func_caps.common_cap.rss_table_size;
+	uint16_t idx, shift;
+	uint8_t *lut;
+	int ret;
+
+	if (reta_size != lut_size ||
+	    reta_size > ETH_RSS_RETA_SIZE_512) {
+		PMD_DRV_LOG(ERR,
+			    "The size of hash lookup table configured (%d)"
+			    "doesn't match the number hardware can "
+			    "supported (%d)",
+			    reta_size, lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc(NULL, reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+
+	ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
+	if (ret)
+		goto out;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = lut[i];
+	}
+
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+ice_set_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret = 0;
+
+	if (!key || key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (key_len != (VSIQF_HKEY_MAX_INDEX + 1) *
+		   sizeof(uint32_t)) {
+		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
+		return -EINVAL;
+	}
+
+	struct ice_aqc_get_set_rss_keys *key_dw =
+		(struct ice_aqc_get_set_rss_keys *)key;
+
+	ret = ice_aq_set_rss_key(hw, vsi->idx, key_dw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to configure RSS key via AQ");
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int
+ice_get_rss_key(struct ice_vsi *vsi, uint8_t *key, uint8_t *key_len)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int ret;
+
+	if (!key || !key_len)
+		return -EINVAL;
+
+	ret = ice_aq_get_rss_key
+		(hw, vsi->idx,
+		 (struct ice_aqc_get_set_rss_keys *)key);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key via AQ");
+		return -EINVAL;
+	}
+	*key_len = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
+
+	return 0;
+}
+
+static int
+ice_rss_hash_update(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	/* set hash key */
+	status = ice_set_rss_key(vsi, rss_conf->rss_key, rss_conf->rss_key_len);
+	if (status)
+		return status;
+
+	/* TODO: hash enable config, ice_add_rss_cfg */
+	return 0;
+}
+
+static int
+ice_rss_hash_conf_get(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_conf *rss_conf)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *vsi = pf->main_vsi;
+
+	ice_get_rss_key(vsi, rss_conf->rss_key,
+			&rss_conf->rss_key_len);
+
+	/* TODO: default set to 0 as hf config is not supported now */
+	rss_conf->rss_hf = 0;
+	return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 94e45c8..3cefa5b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -102,6 +102,19 @@
 		       ICE_FLAG_RSS_AQ_CAPABLE | \
 		       ICE_FLAG_VF_MAC_BY_PF)
 
+#define ICE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER | \
+	ETH_RSS_FRAG_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_NONFRAG_IPV6_SCTP | \
+	ETH_RSS_NONFRAG_IPV6_OTHER | \
+	ETH_RSS_L2_PAYLOAD)
+
 struct ice_adapter;
 
 /**
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 67bbd08..f88e733 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -946,6 +946,20 @@
 	return desc;
 }
 
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+ice_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> ICE_RX_DESC_STATUS_FLTSTAT_S) &
+		  ICE_RX_DESC_FLTSTAT_RSS_HASH) ==
+		 ICE_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	return flags;
+}
+
 /* Rx L3/L4 checksum */
 static inline uint64_t
 ice_rxd_error_to_pkt_flags(uint64_t qword)
@@ -1094,6 +1108,7 @@
 	struct ice_rx_queue *rxq = rx_queue;
 	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
 	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
 	struct ice_rx_entry *sw_ring = rxq->sw_ring;
 	struct ice_rx_entry *rxe;
 	struct rte_mbuf *nmb; /* new allocated mbuf */
@@ -1126,6 +1141,7 @@
 			dev->data->rx_mbuf_alloc_failed++;
 			break;
 		}
+		rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
 
 		nb_hold++;
 		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
@@ -1160,7 +1176,11 @@
 		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
 							ICE_RXD_QW1_PTYPE_M) >>
 						       ICE_RXD_QW1_PTYPE_S)];
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
 		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 		rxm->ol_flags |= pkt_flags;
 		/* copy old mbuf to rx_pkts */
 		rx_pkts[nb_rx++] = rxm;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
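
As a companion to the reta sketch above, a hedged example of the new
rss_hash_update op; the 52-byte key length is inferred from
(VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t) in the patch and should be
confirmed against dev_info.hash_key_size:

	#include <rte_ethdev.h>

	static int
	set_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
	{
		struct rte_eth_rss_conf conf = {
			.rss_key = key,		/* expected 52 bytes for ice */
			.rss_key_len = key_len,
			.rss_hf = 0,	/* hash-function config is still a TODO */
		};

		return rte_eth_dev_rss_hash_update(port_id, &conf);
	}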

* [dpdk-dev] [PATCH v6 26/31] net/ice: support RX queue interruption
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (24 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 25/31] net/ice: support RSS Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 27/31] net/ice: support FW version getting Wenzhuo Lu
                     ` (5 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the below ops; a short usage sketch follows the diffstat:
rx_queue_intr_enable
rx_queue_intr_disable

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   1 +
 drivers/net/ice/ice_ethdev.c     | 230 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 231 insertions(+)
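
Before the diff, a sketch of the application-side flow these ops enable; it
assumes rxq interrupts were requested at configure time and leaves the epoll
plumbing (rte_epoll_wait() on the queue's event fd) as a comment:

	#include <rte_ethdev.h>

	/* Requires dev_conf.intr_conf.rxq = 1 before rte_eth_dev_configure(). */
	static void
	sleep_until_rx(uint16_t port_id, uint16_t queue_id)
	{
		rte_eth_dev_rx_intr_enable(port_id, queue_id);
		/* ... block on the event fd here, e.g. via rte_epoll_wait() ... */
		rte_eth_dev_rx_intr_disable(port_id, queue_id);
		/* ... then drain the queue with rte_eth_rx_burst() ... */
	}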

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index b492ccd..4d6ca4f 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a5e8da6..7c9ddcb 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -48,6 +48,10 @@ static int ice_macaddr_add(struct rte_eth_dev *dev,
 			   __rte_unused uint32_t index,
 			   uint32_t pool);
 static void ice_macaddr_remove(struct rte_eth_dev *dev, uint32_t index);
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -86,6 +90,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.reta_query                   = ice_rss_reta_query,
 	.rss_hash_update              = ice_rss_hash_update,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
+	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
+	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
@@ -1264,10 +1270,39 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 }
 
 static void
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	/* disable interrupts and also clear all the existing config */
+	for (i = 0; i < vsi->nb_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+		rte_wmb();
+	}
+
+	if (rte_intr_allow_others(intr_handle))
+		/* vfio-pci */
+		for (i = 0; i < vsi->nb_msix; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		/* igb_uio */
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0), GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static void
 ice_dev_stop(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_vsi *main_vsi = pf->main_vsi;
 	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint16_t i;
@@ -1284,6 +1319,9 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	for (i = 0; i < data->nb_tx_queues; i++)
 		ice_tx_queue_stop(dev, i);
 
+	/* disable all queue interrupts */
+	ice_vsi_disable_queues_intr(main_vsi);
+
 	/* Clear all queues and release mbufs */
 	ice_clear_queues(dev);
 
@@ -1411,6 +1449,158 @@ static int ice_init_rss(struct ice_pf *pf)
 	return 0;
 }
 
+static void
+__vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
+		       int base_queue, int nb_queue)
+{
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint32_t val, val_tx;
+	int i;
+
+	for (i = 0; i < nb_queue; i++) {
+		/* do the actual binding */
+		val = (msix_vect & QINT_RQCTL_MSIX_INDX_M) |
+		      (0 << QINT_RQCTL_ITR_INDX_S) | QINT_RQCTL_CAUSE_ENA_M;
+		val_tx = (msix_vect & QINT_TQCTL_MSIX_INDX_M) |
+			 (0 << QINT_TQCTL_ITR_INDX_S) | QINT_TQCTL_CAUSE_ENA_M;
+
+		PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
+			    base_queue + i, msix_vect);
+		/* set ITR0 value */
+		ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+		ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
+		ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
+	}
+}
+
+static void
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_vect = vsi->msix_intr;
+	uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+	uint16_t queue_idx = 0;
+	int record = 0;
+	int i;
+
+	/* clear Rx/Tx queue interrupt */
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		ICE_WRITE_REG(hw, QINT_TQCTL(vsi->base_queue + i), 0);
+		ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
+	}
+
+	/* PF bind interrupt */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		queue_idx = 0;
+		record = 1;
+	}
+
+	for (i = 0; i < vsi->nb_used_qps; i++) {
+		if (nb_msix <= 1) {
+			if (!rte_intr_allow_others(intr_handle))
+				msix_vect = ICE_MISC_VEC_ID;
+
+			/* uio: map all queues to one msix_vect */
+			__vsi_queues_bind_intr(vsi, msix_vect,
+					       vsi->base_queue + i,
+					       vsi->nb_used_qps - i);
+
+			for (; !!record && i < vsi->nb_used_qps; i++)
+				intr_handle->intr_vec[queue_idx + i] =
+					msix_vect;
+			break;
+		}
+
+		/* vfio 1:1 queue/msix_vect mapping */
+		__vsi_queues_bind_intr(vsi, msix_vect,
+				       vsi->base_queue + i, 1);
+
+		if (!!record)
+			intr_handle->intr_vec[queue_idx + i] = msix_vect;
+
+		msix_vect++;
+		nb_msix--;
+	}
+}
+
+static void
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+{
+	struct rte_eth_dev *dev = vsi->adapter->eth_dev;
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	uint16_t msix_intr, i;
+
+	if (rte_intr_allow_others(intr_handle))
+		for (i = 0; i < vsi->nb_used_qps; i++) {
+			msix_intr = vsi->msix_intr + i;
+			ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
+				      GLINT_DYN_CTL_INTENA_M |
+				      GLINT_DYN_CTL_CLEARPBA_M |
+				      GLINT_DYN_CTL_ITR_INDX_M |
+				      GLINT_DYN_CTL_WB_ON_ITR_M);
+		}
+	else
+		ICE_WRITE_REG(hw, GLINT_DYN_CTL(0),
+			      GLINT_DYN_CTL_INTENA_M |
+			      GLINT_DYN_CTL_CLEARPBA_M |
+			      GLINT_DYN_CTL_ITR_INDX_M |
+			      GLINT_DYN_CTL_WB_ON_ITR_M);
+}
+
+static int
+ice_rxq_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_vsi *vsi = pf->main_vsi;
+	uint32_t intr_vector = 0;
+
+	rte_intr_disable(intr_handle);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (intr_vector > ICE_MAX_INTR_QUEUE_NUM) {
+			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
+				    ICE_MAX_INTR_QUEUE_NUM);
+			return -ENOTSUP;
+		}
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+		rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
+			    0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec",
+				    dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* Map queues with MSIX interrupt */
+	vsi->nb_used_qps = dev->data->nb_rx_queues;
+	ice_vsi_queues_bind_intr(vsi);
+
+	/* Enable interrupts for all the queues */
+	ice_vsi_enable_queues_intr(vsi);
+
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
 static int
 ice_dev_start(struct rte_eth_dev *dev)
 {
@@ -1447,6 +1637,10 @@ static int ice_init_rss(struct ice_pf *pf)
 
 	ice_set_rx_function(dev);
 
+	/* enable Rx interrupts and map Rx queues to interrupt vectors */
+	if (ice_rxq_intr_setup(dev))
+		return -EIO;
+
 	ret = ice_aq_set_event_mask(hw, hw->port_info->lport,
 				    ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT |
 				     ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM |
@@ -2252,6 +2446,42 @@ static int ice_macaddr_set(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
+				    uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t val;
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+	      GLINT_DYN_CTL_ITR_INDX_M;
+	val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = intr_handle->intr_vec[queue_id];
+
+	ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
+
+	return 0;
+}
+
 static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 27/31] net/ice: support FW version getting
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (25 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 26/31] net/ice: support RX queue interruption Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 28/31] net/ice: support EEPROM information getting Wenzhuo Lu
                     ` (4 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the fw_version_get ops; a short usage sketch follows the diffstat.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)
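
Before the diff, a small usage sketch; note the op's convention of returning
the needed buffer size (including the trailing '\0') when the caller's
buffer is too short:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_fw_version(uint16_t port_id)
	{
		char ver[64];
		int ret = rte_eth_dev_fw_version_get(port_id, ver, sizeof(ver));

		if (ret == 0)
			printf("port %u firmware: %s\n", port_id, ver);
		else if (ret > 0)
			printf("need a %d-byte buffer\n", ret);
	}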

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 4d6ca4f..c2d91a0 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -24,6 +24,7 @@ QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+FW version           = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 7c9ddcb..36295fd 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -52,6 +52,8 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
 				    uint16_t queue_id);
 static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 				     uint16_t queue_id);
+static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
 
@@ -92,6 +94,7 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.rss_hash_conf_get            = ice_rss_hash_conf_get,
 	.rx_queue_intr_enable         = ice_rx_queue_intr_enable,
 	.rx_queue_intr_disable        = ice_rx_queue_intr_disable,
+	.fw_version_get               = ice_fw_version_get,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
@@ -2483,6 +2486,24 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+
+	ret = snprintf(fw_version, fw_size, "%d.%d.%05d %d.%d",
+		       hw->fw_maj_ver, hw->fw_min_ver, hw->fw_build,
+		       hw->api_maj_ver, hw->api_min_ver);
+
+	/* add the size of '\0' */
+	ret += 1;
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
+static int
 ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
 {
 	struct ice_hw *hw;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 28/31] net/ice: support EEPROM information getting
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (26 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 27/31] net/ice: support FW version getting Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 29/31] net/ice: support advance RX/TX Wenzhuo Lu
                     ` (3 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Wei Zhao

Add the below ops; a short usage sketch follows the diffstat:
get_eeprom_length
get_eeprom

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 doc/guides/nics/features/ice.ini |  1 +
 drivers/net/ice/ice_ethdev.c     | 45 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)
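
Before the diff, a hedged sketch of reading the NVM through the generic API;
the op works in 16-bit words internally, so offset and length should be kept
even (the caller owns and frees info->data):

	#include <stdlib.h>
	#include <rte_ethdev.h>

	static int
	dump_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
	{
		int len = rte_eth_dev_get_eeprom_length(port_id);

		if (len <= 0)
			return -1;
		info->data = malloc(len);
		if (info->data == NULL)
			return -1;
		info->offset = 0;
		info->length = len;
		return rte_eth_dev_get_eeprom(port_id, info);
	}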

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index c2d91a0..7e64c16 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -25,6 +25,7 @@ L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
 FW version           = Y
+Module EEPROM dump   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 36295fd..a5ee8f8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -56,6 +56,9 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 			      size_t fw_size);
 static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 			     uint16_t pvid, int on);
+static int ice_get_eeprom_length(struct rte_eth_dev *dev);
+static int ice_get_eeprom(struct rte_eth_dev *dev,
+			  struct rte_dev_eeprom_info *eeprom);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -98,6 +101,8 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
+	.get_eeprom_length            = ice_get_eeprom_length,
+	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
 };
 
@@ -2586,6 +2591,46 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 }
 
 static int
+ice_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Convert word count to byte count */
+	return hw->nvm.sr_words << 1;
+}
+
+static int
+ice_get_eeprom(struct rte_eth_dev *dev,
+	       struct rte_dev_eeprom_info *eeprom)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t *data = eeprom->data;
+	uint16_t offset, length, i;
+	enum ice_status ret_code = ICE_SUCCESS;
+
+	offset = eeprom->offset >> 1;
+	length = eeprom->length >> 1;
+
+	if (offset > hw->nvm.sr_words ||
+	    offset + length > hw->nvm.sr_words) {
+		PMD_DRV_LOG(ERR, "Requested EEPROM bytes out of range.");
+		return -EINVAL;
+	}
+
+	eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	for (i = 0; i < length; i++) {
+		ret_code = ice_read_sr_word(hw, offset + i, &data[i]);
+		if (ret_code != ICE_SUCCESS) {
+			PMD_DRV_LOG(ERR, "EEPROM read failed.");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
 {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 29/31] net/ice: support advance RX/TX
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (27 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 28/31] net/ice: support EEPROM information getting Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 30/31] net/ice: support statistics Wenzhuo Lu
                     ` (2 subsequent siblings)
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add scattered and bulk-allocation RX functions.
Add a simple TX function.
A worked example of the scattered-Rx threshold follows the diffstat.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     |   4 +-
 drivers/net/ice/ice_rxtx.c       | 663 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 666 insertions(+), 3 deletions(-)
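
The first ice_rxtx.c hunk below turns on scattered Rx whenever a maximum
frame cannot fit in one mbuf; a worked example with assumed defaults
(2048-byte mempool data room, 128-byte RTE_PKTMBUF_HEADROOM, 4-byte
ICE_VLAN_TAG_SIZE):

	uint16_t buf_size    = 2048 - 128;	/* usable bytes per mbuf = 1920 */
	uint32_t max_pkt_len = 9728;		/* hypothetical jumbo frame */

	/* 9728 + 2 * 4 = 9736 > 1920, so dev->data->scattered_rx is set and
	 * ice_set_rx_function() later picks ice_recv_scattered_pkts(). */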

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 7e64c16..134063b 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,9 +8,11 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Rx interrupt         = Y
+Fast mbuf free       = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
+Scattered Rx         = Y
 TSO                  = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a5ee8f8..c0c530f 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1726,6 +1726,7 @@ static int ice_init_rss(struct ice_pf *pf)
 		DEV_RX_OFFLOAD_VLAN_EXTEND |
 		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_KEEP_CRC |
+		DEV_RX_OFFLOAD_SCATTER |
 		DEV_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
 		DEV_TX_OFFLOAD_VLAN_INSERT |
@@ -1736,7 +1737,8 @@ static int ice_init_rss(struct ice_pf *pf)
 		DEV_TX_OFFLOAD_SCTP_CKSUM |
 		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		DEV_TX_OFFLOAD_MULTI_SEGS |
+		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f88e733..f7637d2 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -111,6 +111,10 @@
 	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
 			      RTE_PKTMBUF_HEADROOM);
 
+	/* Check if scattered RX needs to be used. */
+	if ((rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size)
+		dev->data->scattered_rx = 1;
+
 	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
 
 	/* Init the Rx tail register*/
@@ -1020,6 +1024,430 @@
 		   mb->vlan_tci, mb->vlan_tci_outer);
 }
 
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+#define ICE_LOOK_AHEAD 8
+#if (ICE_LOOK_AHEAD != 8)
+#error "PMD ICE: ICE_LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[ICE_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >> ICE_RXD_QW1_STATUS_S;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+		return 0;
+
+	/**
+	 * Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < ICE_RX_MAX_BURST; i += ICE_LOOK_AHEAD,
+	     rxdp += ICE_LOOK_AHEAD, rxep += ICE_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = ICE_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			       ICE_RXD_QW1_STATUS_S;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < ICE_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << ICE_RX_DESC_STATUS_DD_S);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			mb = rxep[j].mbuf;
+			qword1 = rte_le_to_cpu_64(
+					rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				   ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss =
+					rte_le_to_cpu_32(
+						rxdp[j].wb.qword0.hi_dword.rss);
+			mb->packet_type = ptype_tbl[(uint8_t)(
+						(qword1 &
+						 ICE_RXD_QW1_PTYPE_M) >>
+						ICE_RXD_QW1_PTYPE_S)];
+			ice_rxd_to_vlan_tci(mb, &rxdp[j]);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < ICE_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != ICE_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	PMD_RX_LOG(DEBUG, "ice_rx_scan_hw_ring: "
+		   "port_id=%u, queue_id=%u, nb_rx=%d",
+		   rxq->port_id, rxq->queue_id, nb_rx);
+
+	return nb_rx;
+}
+
+static inline uint16_t
+ice_rx_fill_from_stage(struct ice_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_rx_desc *rxdp;
+	struct ice_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+			       (rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1].mbuf);
+
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update the Rx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = (struct ice_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+	struct rte_eth_dev *dev;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)ice_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (ice_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return ice_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+ice_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0;
+	uint16_t n;
+	uint16_t count;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_rx;
+
+	if (likely(nb_pkts <= ICE_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, ICE_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#else
+static uint16_t
+ice_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
+			 struct rte_mbuf __rte_unused **rx_pkts,
+			 uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */
+
+static uint16_t
+ice_recv_scattered_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union ice_rx_desc *rxdp;
+	union ice_rx_desc rxd;
+	struct ice_rx_entry *sw_ring = rxq->sw_ring;
+	struct ice_rx_entry *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb; /* new allocated mbuf */
+	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	uint16_t rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags = 0;
+	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
+			    ICE_RXD_QW1_STATUS_S;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
+			break;
+
+		/* allocate mbuf */
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		rxd = *rxdp; /* copy the descriptor in the ring to a temp variable */
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
+				ICE_RXD_QW1_LEN_PBUF_S;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		ice_rxd_to_vlan_tci(rxm, rxdp);
+		rxm->packet_type = ptype_tbl[(uint8_t)((qword1 &
+							ICE_RXD_QW1_PTYPE_M) >>
+						       ICE_RXD_QW1_PTYPE_S)];
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/**
+		 * If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/**
+		 * This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+							   ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+
+		pkt_flags = ice_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= ice_rxd_error_to_pkt_flags(qword1);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+				   (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		/* write TAIL register */
+		ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	/* return received packet in the burst */
+	return nb_rx;
+}
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -1053,7 +1481,11 @@
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == ice_recv_pkts)
+	if (dev->rx_pkt_burst == ice_recv_pkts ||
+#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC
+	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
+#endif
+	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
 		return ptypes;
 	return NULL;
 }
@@ -1310,6 +1742,20 @@
 	return 0;
 }
 
+/* Build the Tx data descriptor cmd_type_offset_bsz word from flags */
+static inline uint64_t
+ice_build_ctob(uint32_t td_cmd,
+	       uint32_t td_offset,
+	       uint16_t size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd << ICE_TXD_QW1_CMD_S) |
+				((uint64_t)td_offset << ICE_TXD_QW1_OFFSET_S) |
+				((uint64_t)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+				((uint64_t)td_tag << ICE_TXD_QW1_L2TAG1_S));
+}
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
@@ -1528,10 +1974,213 @@
 	return nb_tx;
 }
 
+static inline int __attribute__((always_inline))
+ice_tx_free_bufs(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txep;
+	uint16_t i;
+
+	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
+	     rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)];
+
+	for (i = 0; i < txq->tx_rs_thresh; i++)
+		rte_prefetch0((txep + i)->mbuf);
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	} else {
+		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
+			rte_pktmbuf_free_seg(txep->mbuf);
+			txep->mbuf = NULL;
+		}
+	}
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
+
+	return txq->tx_rs_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+	uint32_t i;
+
+	for (i = 0; i < 4; i++, txdp++, pkts++) {
+		dma_addr = rte_mbuf_data_iova(*pkts);
+		txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+		txdp->cmd_type_offset_bsz =
+			ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+				       (*pkts)->data_len, 0);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t dma_addr;
+
+	dma_addr = rte_mbuf_data_iova(*pkts);
+	txdp->buf_addr = rte_cpu_to_le_64(dma_addr);
+	txdp->cmd_type_offset_bsz =
+		ice_build_ctob((uint32_t)ICE_TD_CMD, 0,
+			       (*pkts)->data_len, 0);
+}
+
+static inline void
+ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts,
+		    uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/**
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = nb_pkts & ((uint32_t)~N_PER_LOOP_MASK);
+	leftover = nb_pkts & ((uint32_t)N_PER_LOOP_MASK);
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(struct ice_tx_queue *txq,
+	     struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	volatile struct ice_tx_desc *txr = txq->tx_ring;
+	uint16_t n = 0;
+
+	/**
+	 * Begin scanning the H/W ring for done descriptors when the number
+	 * of available descriptors drops below tx_free_thresh. For each done
+	 * descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ice_tx_free_bufs(txq);
+
+	/* Use available descriptor only */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(!nb_pkts))
+		return 0;
+
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ice_tx_fill_hw_ring(txq, tx_pkts, n);
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill hardware descriptor ring with mbuf data */
+	ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/* Determine whether the RS bit needs to be set */
+	if (txq->tx_tail > txq->tx_next_rs) {
+		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) <<
+					 ICE_TXD_QW1_CMD_S);
+		txq->tx_next_rs =
+			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+		if (txq->tx_next_rs >= txq->nb_tx_desc)
+			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+	}
+
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	/* Update the tx tail register */
+	rte_wmb();
+	ICE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+static uint16_t
+ice_xmit_pkts_simple(void *tx_queue,
+		     struct rte_mbuf **tx_pkts,
+		     uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+
+	if (likely(nb_pkts <= ICE_TX_MAX_BURST))
+		return tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				    tx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+						      ICE_TX_MAX_BURST);
+
+		ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue,
+				   &tx_pkts[nb_tx], num);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 void __attribute__((cold))
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
-	dev->rx_pkt_burst = ice_recv_pkts;
+	PMD_INIT_FUNC_TRACE();
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (dev->data->scattered_rx) {
+		/* Set the non-LRO scattered function */
+		PMD_INIT_LOG(DEBUG,
+			     "Using a Scattered function on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+	} else if (ad->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function "
+			     "will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG,
+			     "Rx Burst Bulk Alloc Preconditions are not "
+			     "satisfied, Normal Rx will be used on port %d.",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = ice_recv_pkts;
+	}
 }
 
 /*********************************************************************
@@ -1585,8 +2234,18 @@ void __attribute__((cold))
 void __attribute__((cold))
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
+		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+	}
 }
 
 /* For each value it means, datasheet of hardware can tell more details
-- 
1.9.3

^ permalink raw reply	[flat|nested] 309+ messages in thread
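
ice_set_tx_function() above keys off ad->tx_simple_allowed, which is set
elsewhere in the series; a sketch of the kind of precondition involved (the
exact check is an assumption here, not taken from this hunk):

	/* Simple Tx is only safe with no per-queue offloads and a burst-sized
	 * RS threshold, so descriptors can be freed in large batches. */
	ad->tx_simple_allowed = (txq->offloads == 0 &&
				 txq->tx_rs_thresh >= ICE_TX_MAX_BURST);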

* [dpdk-dev] [PATCH v6 30/31] net/ice: support statistics
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (28 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 29/31] net/ice: support advance RX/TX Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 31/31] net/ice: support descriptor ops Wenzhuo Lu
  2018-12-18 13:53   ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Ferruh Yigit
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Jia Guo

Add the below ops; a short usage sketch follows the diffstat:
stats_get
stats_reset
xstats_get
xstats_get_names
xstats_reset

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Jia Guo <jia.guo@intel.com>
---
 doc/guides/nics/features/ice.ini |   2 +
 drivers/net/ice/ice_ethdev.c     | 566 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 568 insertions(+)
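
Before the diff, a usage sketch for the new ops via the generic API; the
two-call xstats pattern (size query, then fetch) is standard ethdev usage:

	#include <inttypes.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <rte_ethdev.h>

	static void
	show_stats(uint16_t port_id)
	{
		struct rte_eth_stats st;
		struct rte_eth_xstat *xs;
		int i, n;

		rte_eth_stats_get(port_id, &st);
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
		       st.ipackets, st.opackets);

		n = rte_eth_xstats_get(port_id, NULL, 0);	/* count only */
		if (n <= 0)
			return;
		xs = calloc(n, sizeof(*xs));
		if (xs != NULL && rte_eth_xstats_get(port_id, xs, n) == n)
			for (i = 0; i < n; i++)
				printf("xstat[%" PRIu64 "] = %" PRIu64 "\n",
				       xs[i].id, xs[i].value);
		free(xs);
	}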

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 134063b..efa5601 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -26,6 +26,8 @@ QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
+Extended stats       = Y
 FW version           = Y
 Module EEPROM dump   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c0c530f..5fb70ee 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -59,6 +59,14 @@ static int ice_vlan_pvid_set(struct rte_eth_dev *dev,
 static int ice_get_eeprom_length(struct rte_eth_dev *dev);
 static int ice_get_eeprom(struct rte_eth_dev *dev,
 			  struct rte_dev_eeprom_info *eeprom);
+static int ice_stats_get(struct rte_eth_dev *dev,
+			 struct rte_eth_stats *stats);
+static void ice_stats_reset(struct rte_eth_dev *dev);
+static int ice_xstats_get(struct rte_eth_dev *dev,
+			  struct rte_eth_xstat *xstats, unsigned int n);
+static int ice_xstats_get_names(struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				unsigned int limit);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) },
@@ -104,8 +112,92 @@ static int ice_get_eeprom(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.stats_get                    = ice_stats_get,
+	.stats_reset                  = ice_stats_reset,
+	.xstats_get                   = ice_xstats_get,
+	.xstats_get_names             = ice_xstats_get_names,
+	.xstats_reset                 = ice_stats_reset,
 };
 
+/* store statistics names and its offset in stats structure */
+struct ice_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct ice_xstats_name_off ice_stats_strings[] = {
+	{"rx_unicast_packets", offsetof(struct ice_eth_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct ice_eth_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct ice_eth_stats, rx_broadcast)},
+	{"rx_dropped", offsetof(struct ice_eth_stats, rx_discards)},
+	{"rx_unknown_protocol_packets", offsetof(struct ice_eth_stats,
+		rx_unknown_protocol)},
+	{"tx_unicast_packets", offsetof(struct ice_eth_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct ice_eth_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct ice_eth_stats, tx_broadcast)},
+	{"tx_dropped", offsetof(struct ice_eth_stats, tx_discards)},
+};
+
+#define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
+		sizeof(ice_stats_strings[0]))
+
+static const struct ice_xstats_name_off ice_hw_port_strings[] = {
+	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
+		tx_dropped_link_down)},
+	{"rx_crc_errors", offsetof(struct ice_hw_port_stats, crc_errors)},
+	{"rx_illegal_byte_errors", offsetof(struct ice_hw_port_stats,
+		illegal_bytes)},
+	{"rx_error_bytes", offsetof(struct ice_hw_port_stats, error_bytes)},
+	{"mac_local_errors", offsetof(struct ice_hw_port_stats,
+		mac_local_faults)},
+	{"mac_remote_errors", offsetof(struct ice_hw_port_stats,
+		mac_remote_faults)},
+	{"rx_len_errors", offsetof(struct ice_hw_port_stats,
+		rx_len_errors)},
+	{"tx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_tx)},
+	{"rx_xon_packets", offsetof(struct ice_hw_port_stats, link_xon_rx)},
+	{"tx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_tx)},
+	{"rx_xoff_packets", offsetof(struct ice_hw_port_stats, link_xoff_rx)},
+	{"rx_size_64_packets", offsetof(struct ice_hw_port_stats, rx_size_64)},
+	{"rx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_127)},
+	{"rx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_255)},
+	{"rx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_511)},
+	{"rx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1023)},
+	{"rx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_1522)},
+	{"rx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		rx_size_big)},
+	{"rx_undersized_errors", offsetof(struct ice_hw_port_stats,
+		rx_undersize)},
+	{"rx_oversize_errors", offsetof(struct ice_hw_port_stats,
+		rx_oversize)},
+	{"rx_mac_short_pkt_dropped", offsetof(struct ice_hw_port_stats,
+		mac_short_pkt_dropped)},
+	{"rx_fragmented_errors", offsetof(struct ice_hw_port_stats,
+		rx_fragments)},
+	{"rx_jabber_errors", offsetof(struct ice_hw_port_stats, rx_jabber)},
+	{"tx_size_64_packets", offsetof(struct ice_hw_port_stats, tx_size_64)},
+	{"tx_size_65_to_127_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_127)},
+	{"tx_size_128_to_255_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_255)},
+	{"tx_size_256_to_511_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_511)},
+	{"tx_size_512_to_1023_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1023)},
+	{"tx_size_1024_to_1522_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_1522)},
+	{"tx_size_1523_to_max_packets", offsetof(struct ice_hw_port_stats,
+		tx_size_big)},
+};
+
+#define ICE_NB_HW_PORT_XSTATS (sizeof(ice_hw_port_strings) / \
+		sizeof(ice_hw_port_strings[0]))
+
 static void
 ice_init_controlq_parameter(struct ice_hw *hw)
 {
@@ -2632,6 +2724,480 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+ice_stat_update_32(struct ice_hw *hw,
+		   uint32_t reg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, reg);
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = (uint64_t)(new_data - *offset);
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_32_BIT_WIDTH))
+				   - *offset);
+}
+
+static void
+ice_stat_update_40(struct ice_hw *hw,
+		   uint32_t hireg,
+		   uint32_t loreg,
+		   bool offset_loaded,
+		   uint64_t *offset,
+		   uint64_t *stat)
+{
+	uint64_t new_data;
+
+	new_data = (uint64_t)ICE_READ_REG(hw, loreg);
+	new_data |= (uint64_t)(ICE_READ_REG(hw, hireg) & ICE_8_BIT_MASK) <<
+		    ICE_32_BIT_WIDTH;
+
+	if (!offset_loaded)
+		*offset = new_data;
+
+	if (new_data >= *offset)
+		*stat = new_data - *offset;
+	else
+		*stat = (uint64_t)((new_data +
+				    ((uint64_t)1 << ICE_40_BIT_WIDTH)) -
+				   *offset);
+
+	*stat &= ICE_40_BIT_MASK;
+}
+
+/* Get all the statistics of a VSI */
+static void
+ice_update_vsi_stats(struct ice_vsi *vsi)
+{
+	struct ice_eth_stats *oes = &vsi->eth_stats_offset;
+	struct ice_eth_stats *nes = &vsi->eth_stats;
+	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
+	int idx = rte_le_to_cpu_16(vsi->vsi_id);
+
+	ice_stat_update_40(hw, GLV_GORCH(idx), GLV_GORCL(idx),
+			   vsi->offset_loaded, &oes->rx_bytes,
+			   &nes->rx_bytes);
+	ice_stat_update_40(hw, GLV_UPRCH(idx), GLV_UPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_unicast,
+			   &nes->rx_unicast);
+	ice_stat_update_40(hw, GLV_MPRCH(idx), GLV_MPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_multicast,
+			   &nes->rx_multicast);
+	ice_stat_update_40(hw, GLV_BPRCH(idx), GLV_BPRCL(idx),
+			   vsi->offset_loaded, &oes->rx_broadcast,
+			   &nes->rx_broadcast);
+	/* exclude CRC bytes */
+	nes->rx_bytes -= (nes->rx_unicast + nes->rx_multicast +
+			  nes->rx_broadcast) * ETHER_CRC_LEN;
+
+	ice_stat_update_32(hw, GLV_RDPC(idx), vsi->offset_loaded,
+			   &oes->rx_discards, &nes->rx_discards);
+	/* GLV_REPC not supported */
+	/* GLV_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(idx), vsi->offset_loaded,
+			   &oes->rx_unknown_protocol,
+			   &nes->rx_unknown_protocol);
+	ice_stat_update_40(hw, GLV_GOTCH(idx), GLV_GOTCL(idx),
+			   vsi->offset_loaded, &oes->tx_bytes,
+			   &nes->tx_bytes);
+	ice_stat_update_40(hw, GLV_UPTCH(idx), GLV_UPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_unicast,
+			   &nes->tx_unicast);
+	ice_stat_update_40(hw, GLV_MPTCH(idx), GLV_MPTCL(idx),
+			   vsi->offset_loaded, &oes->tx_multicast,
+			   &nes->tx_multicast);
+	ice_stat_update_40(hw, GLV_BPTCH(idx), GLV_BPTCL(idx),
+			   vsi->offset_loaded,  &oes->tx_broadcast,
+			   &nes->tx_broadcast);
+	/* GLV_TDPC not supported */
+	ice_stat_update_32(hw, GLV_TEPC(idx), vsi->offset_loaded,
+			   &oes->tx_errors, &nes->tx_errors);
+	vsi->offset_loaded = true;
+
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats start **************",
+		    vsi->vsi_id);
+	PMD_DRV_LOG(DEBUG, "rx_bytes:            %"PRIu64"", nes->rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:          %"PRIu64"", nes->rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:        %"PRIu64"", nes->rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:        %"PRIu64"", nes->rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:         %"PRIu64"", nes->rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %"PRIu64"",
+		    nes->rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:            %"PRIu64"", nes->tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:          %"PRIu64"", nes->tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:        %"PRIu64"", nes->tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:        %"PRIu64"", nes->tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:         %"PRIu64"", nes->tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:           %"PRIu64"", nes->tx_errors);
+	PMD_DRV_LOG(DEBUG, "************** VSI[%u] stats end ****************",
+		    vsi->vsi_id);
+}
+
+static void
+ice_read_stats_registers(struct ice_pf *pf, struct ice_hw *hw)
+{
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+	struct ice_hw_port_stats *os = &pf->stats_offset; /* old stats */
+
+	/* Get statistics of struct ice_eth_stats */
+	ice_stat_update_40(hw, GLPRT_GORCH(hw->port_info->lport),
+			   GLPRT_GORCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_bytes,
+			   &ns->eth.rx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPRCH(hw->port_info->lport),
+			   GLPRT_UPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_unicast,
+			   &ns->eth.rx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPRCH(hw->port_info->lport),
+			   GLPRT_MPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_multicast,
+			   &ns->eth.rx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPRCH(hw->port_info->lport),
+			   GLPRT_BPRCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.rx_broadcast,
+			   &ns->eth.rx_broadcast);
+	ice_stat_update_32(hw, PRTRPB_RDPC,
+			   pf->offset_loaded, &os->eth.rx_discards,
+			   &ns->eth.rx_discards);
+
+	/* Workaround: CRC size should not be included in byte statistics,
+	 * so subtract ETHER_CRC_LEN from the byte counter for each rx packet.
+	 */
+	ns->eth.rx_bytes -= (ns->eth.rx_unicast + ns->eth.rx_multicast +
+			     ns->eth.rx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_REPC not supported */
+	/* GLPRT_RMPC not supported */
+	ice_stat_update_32(hw, GLSWID_RUPP(hw->port_info->lport),
+			   pf->offset_loaded,
+			   &os->eth.rx_unknown_protocol,
+			   &ns->eth.rx_unknown_protocol);
+	ice_stat_update_40(hw, GLPRT_GOTCH(hw->port_info->lport),
+			   GLPRT_GOTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_bytes,
+			   &ns->eth.tx_bytes);
+	ice_stat_update_40(hw, GLPRT_UPTCH(hw->port_info->lport),
+			   GLPRT_UPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_unicast,
+			   &ns->eth.tx_unicast);
+	ice_stat_update_40(hw, GLPRT_MPTCH(hw->port_info->lport),
+			   GLPRT_MPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_multicast,
+			   &ns->eth.tx_multicast);
+	ice_stat_update_40(hw, GLPRT_BPTCH(hw->port_info->lport),
+			   GLPRT_BPTCL(hw->port_info->lport),
+			   pf->offset_loaded, &os->eth.tx_broadcast,
+			   &ns->eth.tx_broadcast);
+	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
+			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
+
+	/* GLPRT_TEPC not supported */
+
+	/* additional port specific stats */
+	ice_stat_update_32(hw, GLPRT_TDOLD(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_dropped_link_down,
+			   &ns->tx_dropped_link_down);
+	ice_stat_update_32(hw, GLPRT_CRCERRS(hw->port_info->lport),
+			   pf->offset_loaded, &os->crc_errors,
+			   &ns->crc_errors);
+	ice_stat_update_32(hw, GLPRT_ILLERRC(hw->port_info->lport),
+			   pf->offset_loaded, &os->illegal_bytes,
+			   &ns->illegal_bytes);
+	/* GLPRT_ERRBC not supported */
+	ice_stat_update_32(hw, GLPRT_MLFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_local_faults,
+			   &ns->mac_local_faults);
+	ice_stat_update_32(hw, GLPRT_MRFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->mac_remote_faults,
+			   &ns->mac_remote_faults);
+
+	ice_stat_update_32(hw, GLPRT_RLEC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_len_errors,
+			   &ns->rx_len_errors);
+
+	ice_stat_update_32(hw, GLPRT_LXONRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_rx,
+			   &ns->link_xon_rx);
+	ice_stat_update_32(hw, GLPRT_LXOFFRXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_rx,
+			   &ns->link_xoff_rx);
+	ice_stat_update_32(hw, GLPRT_LXONTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xon_tx,
+			   &ns->link_xon_tx);
+	ice_stat_update_32(hw, GLPRT_LXOFFTXC(hw->port_info->lport),
+			   pf->offset_loaded, &os->link_xoff_tx,
+			   &ns->link_xoff_tx);
+	ice_stat_update_40(hw, GLPRT_PRC64H(hw->port_info->lport),
+			   GLPRT_PRC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_64,
+			   &ns->rx_size_64);
+	ice_stat_update_40(hw, GLPRT_PRC127H(hw->port_info->lport),
+			   GLPRT_PRC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_127,
+			   &ns->rx_size_127);
+	ice_stat_update_40(hw, GLPRT_PRC255H(hw->port_info->lport),
+			   GLPRT_PRC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_255,
+			   &ns->rx_size_255);
+	ice_stat_update_40(hw, GLPRT_PRC511H(hw->port_info->lport),
+			   GLPRT_PRC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_511,
+			   &ns->rx_size_511);
+	ice_stat_update_40(hw, GLPRT_PRC1023H(hw->port_info->lport),
+			   GLPRT_PRC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1023,
+			   &ns->rx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PRC1522H(hw->port_info->lport),
+			   GLPRT_PRC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_1522,
+			   &ns->rx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PRC9522H(hw->port_info->lport),
+			   GLPRT_PRC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_size_big,
+			   &ns->rx_size_big);
+	ice_stat_update_32(hw, GLPRT_RUC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_undersize,
+			   &ns->rx_undersize);
+	ice_stat_update_32(hw, GLPRT_RFC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_fragments,
+			   &ns->rx_fragments);
+	ice_stat_update_32(hw, GLPRT_ROC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_oversize,
+			   &ns->rx_oversize);
+	ice_stat_update_32(hw, GLPRT_RJC(hw->port_info->lport),
+			   pf->offset_loaded, &os->rx_jabber,
+			   &ns->rx_jabber);
+	ice_stat_update_40(hw, GLPRT_PTC64H(hw->port_info->lport),
+			   GLPRT_PTC64L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_64,
+			   &ns->tx_size_64);
+	ice_stat_update_40(hw, GLPRT_PTC127H(hw->port_info->lport),
+			   GLPRT_PTC127L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_127,
+			   &ns->tx_size_127);
+	ice_stat_update_40(hw, GLPRT_PTC255H(hw->port_info->lport),
+			   GLPRT_PTC255L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_255,
+			   &ns->tx_size_255);
+	ice_stat_update_40(hw, GLPRT_PTC511H(hw->port_info->lport),
+			   GLPRT_PTC511L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_511,
+			   &ns->tx_size_511);
+	ice_stat_update_40(hw, GLPRT_PTC1023H(hw->port_info->lport),
+			   GLPRT_PTC1023L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1023,
+			   &ns->tx_size_1023);
+	ice_stat_update_40(hw, GLPRT_PTC1522H(hw->port_info->lport),
+			   GLPRT_PTC1522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_1522,
+			   &ns->tx_size_1522);
+	ice_stat_update_40(hw, GLPRT_PTC9522H(hw->port_info->lport),
+			   GLPRT_PTC9522L(hw->port_info->lport),
+			   pf->offset_loaded, &os->tx_size_big,
+			   &ns->tx_size_big);
+
+	/* GLPRT_MSPDC not supported */
+	/* GLPRT_XEC not supported */
+
+	pf->offset_loaded = true;
+
+	if (pf->main_vsi)
+		ice_update_vsi_stats(pf->main_vsi);
+}
+
+/* Get all statistics of a port */
+static int
+ice_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_hw_port_stats *ns = &pf->stats; /* new stats */
+
+	/* reading the registers updates pf->stats; now derive the ethdev stats */
+	ice_read_stats_registers(pf, hw);
+
+	stats->ipackets = ns->eth.rx_unicast +
+			  ns->eth.rx_multicast +
+			  ns->eth.rx_broadcast -
+			  ns->eth.rx_discards -
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->opackets = ns->eth.tx_unicast +
+			  ns->eth.tx_multicast +
+			  ns->eth.tx_broadcast;
+	stats->ibytes   = ns->eth.rx_bytes;
+	stats->obytes   = ns->eth.tx_bytes;
+	stats->oerrors  = ns->eth.tx_errors +
+			  pf->main_vsi->eth_stats.tx_errors;
+
+	/* Rx Errors */
+	stats->imissed  = ns->eth.rx_discards +
+			  pf->main_vsi->eth_stats.rx_discards;
+	stats->ierrors  = ns->crc_errors +
+			  ns->rx_undersize +
+			  ns->rx_oversize + ns->rx_fragments + ns->rx_jabber;
+
+	PMD_DRV_LOG(DEBUG, "*************** PF stats start *****************");
+	PMD_DRV_LOG(DEBUG, "rx_bytes:	%"PRIu64"", ns->eth.rx_bytes);
+	PMD_DRV_LOG(DEBUG, "rx_unicast:	%"PRIu64"", ns->eth.rx_unicast);
+	PMD_DRV_LOG(DEBUG, "rx_multicast:%"PRIu64"", ns->eth.rx_multicast);
+	PMD_DRV_LOG(DEBUG, "rx_broadcast:%"PRIu64"", ns->eth.rx_broadcast);
+	PMD_DRV_LOG(DEBUG, "rx_discards:%"PRIu64"", ns->eth.rx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi rx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.rx_discards);
+	PMD_DRV_LOG(DEBUG, "rx_unknown_protocol:  %"PRIu64"",
+		    ns->eth.rx_unknown_protocol);
+	PMD_DRV_LOG(DEBUG, "tx_bytes:	%"PRIu64"", ns->eth.tx_bytes);
+	PMD_DRV_LOG(DEBUG, "tx_unicast:	%"PRIu64"", ns->eth.tx_unicast);
+	PMD_DRV_LOG(DEBUG, "tx_multicast:%"PRIu64"", ns->eth.tx_multicast);
+	PMD_DRV_LOG(DEBUG, "tx_broadcast:%"PRIu64"", ns->eth.tx_broadcast);
+	PMD_DRV_LOG(DEBUG, "tx_discards:%"PRIu64"", ns->eth.tx_discards);
+	PMD_DRV_LOG(DEBUG, "vsi tx_discards:%"PRIu64"",
+		    pf->main_vsi->eth_stats.tx_discards);
+	PMD_DRV_LOG(DEBUG, "tx_errors:		%"PRIu64"", ns->eth.tx_errors);
+
+	PMD_DRV_LOG(DEBUG, "tx_dropped_link_down:	%"PRIu64"",
+		    ns->tx_dropped_link_down);
+	PMD_DRV_LOG(DEBUG, "crc_errors:	%"PRIu64"", ns->crc_errors);
+	PMD_DRV_LOG(DEBUG, "illegal_bytes:	%"PRIu64"",
+		    ns->illegal_bytes);
+	PMD_DRV_LOG(DEBUG, "error_bytes:	%"PRIu64"", ns->error_bytes);
+	PMD_DRV_LOG(DEBUG, "mac_local_faults:	%"PRIu64"",
+		    ns->mac_local_faults);
+	PMD_DRV_LOG(DEBUG, "mac_remote_faults:	%"PRIu64"",
+		    ns->mac_remote_faults);
+	PMD_DRV_LOG(DEBUG, "link_xon_rx:	%"PRIu64"", ns->link_xon_rx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_rx:	%"PRIu64"", ns->link_xoff_rx);
+	PMD_DRV_LOG(DEBUG, "link_xon_tx:	%"PRIu64"", ns->link_xon_tx);
+	PMD_DRV_LOG(DEBUG, "link_xoff_tx:	%"PRIu64"", ns->link_xoff_tx);
+	PMD_DRV_LOG(DEBUG, "rx_size_64:		%"PRIu64"", ns->rx_size_64);
+	PMD_DRV_LOG(DEBUG, "rx_size_127:	%"PRIu64"", ns->rx_size_127);
+	PMD_DRV_LOG(DEBUG, "rx_size_255:	%"PRIu64"", ns->rx_size_255);
+	PMD_DRV_LOG(DEBUG, "rx_size_511:	%"PRIu64"", ns->rx_size_511);
+	PMD_DRV_LOG(DEBUG, "rx_size_1023:	%"PRIu64"", ns->rx_size_1023);
+	PMD_DRV_LOG(DEBUG, "rx_size_1522:	%"PRIu64"", ns->rx_size_1522);
+	PMD_DRV_LOG(DEBUG, "rx_size_big:	%"PRIu64"", ns->rx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_undersize:	%"PRIu64"", ns->rx_undersize);
+	PMD_DRV_LOG(DEBUG, "rx_fragments:	%"PRIu64"", ns->rx_fragments);
+	PMD_DRV_LOG(DEBUG, "rx_oversize:	%"PRIu64"", ns->rx_oversize);
+	PMD_DRV_LOG(DEBUG, "rx_jabber:		%"PRIu64"", ns->rx_jabber);
+	PMD_DRV_LOG(DEBUG, "tx_size_64:		%"PRIu64"", ns->tx_size_64);
+	PMD_DRV_LOG(DEBUG, "tx_size_127:	%"PRIu64"", ns->tx_size_127);
+	PMD_DRV_LOG(DEBUG, "tx_size_255:	%"PRIu64"", ns->tx_size_255);
+	PMD_DRV_LOG(DEBUG, "tx_size_511:	%"PRIu64"", ns->tx_size_511);
+	PMD_DRV_LOG(DEBUG, "tx_size_1023:	%"PRIu64"", ns->tx_size_1023);
+	PMD_DRV_LOG(DEBUG, "tx_size_1522:	%"PRIu64"", ns->tx_size_1522);
+	PMD_DRV_LOG(DEBUG, "tx_size_big:	%"PRIu64"", ns->tx_size_big);
+	PMD_DRV_LOG(DEBUG, "rx_len_errors:	%"PRIu64"", ns->rx_len_errors);
+	PMD_DRV_LOG(DEBUG, "************* PF stats end ****************");
+	return 0;
+}
+
+/* Reset the statistics */
+static void
+ice_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	/* Mark PF and VSI stats to update the offset, aka "reset" */
+	pf->offset_loaded = false;
+	if (pf->main_vsi)
+		pf->main_vsi->offset_loaded = false;
+
+	/* read the stats, reading current register values into offset */
+	ice_read_stats_registers(pf, hw);
+}
+
+static uint32_t
+ice_xstats_calc_num(void)
+{
+	uint32_t num;
+
+	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+
+	return num;
+}
+
+static int
+ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+	       unsigned int n)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	unsigned int i;
+	unsigned int count;
+	struct ice_hw_port_stats *hw_stats = &pf->stats;
+
+	count = ice_xstats_calc_num();
+	if (n < count)
+		return count;
+
+	ice_read_stats_registers(pf, hw);
+
+	if (!xstats)
+		return 0;
+
+	count = 0;
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&hw_stats->eth +
+				      ice_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)hw_stats +
+				      ice_hw_port_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
+
+static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				struct rte_eth_xstat_name *xstats_names,
+				__rte_unused unsigned int limit)
+{
+	unsigned int count = 0;
+	unsigned int i;
+
+	if (!xstats_names)
+		return ice_xstats_calc_num();
+
+	/* Note: limit checked in rte_eth_xstats_names() */
+
+	/* Get stats from ice_eth_stats struct */
+	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_stats_strings[i].name);
+		count++;
+	}
+
+	/* Get individual stats from ice_hw_port struct */
+	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name),
+			 "%s", ice_hw_port_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
+
 static int
 ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	      struct rte_pci_device *pci_dev)
-- 
1.9.3
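
The two stat helpers in this patch encode a pattern worth spelling out:
the hardware exposes free-running 32-bit and 40-bit counters, so the
driver keeps a per-counter offset captured at the last reset, widens
each read to 64 bits, and tolerates a single wrap between reads.
Clearing offset_loaded, as ice_stats_reset() does, is what makes the
deltas start over from zero. A minimal standalone sketch of that
pattern, with illustrative names rather than the driver's actual
register access:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CTR_WIDTH 40	/* hardware counter width in bits */
#define CTR_MASK  ((((uint64_t)1) << CTR_WIDTH) - 1)

/* Accumulate a free-running CTR_WIDTH-bit counter into a 64-bit stat.
 * 'offset' is the raw value captured at the last reset; clearing
 * 'offset_loaded' is what "resets" the statistic.
 */
static void
stat_update(uint64_t raw, bool *offset_loaded, uint64_t *offset,
	    uint64_t *stat)
{
	if (!*offset_loaded) {
		*offset = raw;		/* first read after a reset */
		*offset_loaded = true;
	}

	if (raw >= *offset)
		*stat = raw - *offset;
	else	/* the counter wrapped once since the last read */
		*stat = raw + ((uint64_t)1 << CTR_WIDTH) - *offset;

	*stat &= CTR_MASK;
}

int
main(void)
{
	bool loaded = false;
	uint64_t offset = 0, stat = 0;

	stat_update(CTR_MASK - 5, &loaded, &offset, &stat); /* baseline */
	stat_update(10, &loaded, &offset, &stat);	    /* after wrap */
	printf("delta across the wrap: %" PRIu64 "\n", stat); /* 16 */
	return 0;
}

At the ethdev level, the name/offset tables above feed the standard
xstats calls, so an application retrieves every counter generically.
A rough usage sketch, assuming EAL is initialized and the port has
been started (error handling trimmed):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Dump every extended statistic of a started port. */
static int
dump_xstats(uint16_t port_id)
{
	int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);

	if (n <= 0)
		return n;

	struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
	struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));

	if (names && vals &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[vals[i].id].name, vals[i].value);
	}

	free(names);
	free(vals);
	return 0;
}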

^ permalink raw reply	[flat|nested] 309+ messages in thread

* [dpdk-dev] [PATCH v6 31/31] support descriptor ops
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (29 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 30/31] net/ice: support statistics Wenzhuo Lu
@ 2018-12-18  8:46   ` Wenzhuo Lu
  2018-12-18 13:53   ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Ferruh Yigit
  31 siblings, 0 replies; 309+ messages in thread
From: Wenzhuo Lu @ 2018-12-18  8:46 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu, Qiming Yang, Xiaoyun Li, Jingjing Wu

Add the following ops:
rx_descriptor_status
tx_descriptor_status

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/ice.ini |  2 ++
 drivers/net/ice/ice_ethdev.c     |  2 ++
 drivers/net/ice/ice_rxtx.c       | 58 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h       |  2 ++
 4 files changed, 64 insertions(+)

diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index efa5601..8b1e22e 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -26,6 +26,8 @@ QinQ offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Extended stats       = Y
 FW version           = Y
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5fb70ee..0a81e04 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -112,6 +112,8 @@ static int ice_xstats_get_names(struct rte_eth_dev *dev,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
+	.rx_descriptor_status         = ice_rx_descriptor_status,
+	.tx_descriptor_status         = ice_tx_descriptor_status,
 	.stats_get                    = ice_stats_get,
 	.stats_reset                  = ice_stats_reset,
 	.xstats_get                   = ice_xstats_get,
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f7637d2..78e40fe 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1490,6 +1490,64 @@
 	return NULL;
 }
 
+int
+ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct ice_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_cpu_to_le_64((1ULL << ICE_RX_DESC_STATUS_DD_S) <<
+				ICE_RXD_QW1_STATUS_S);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
+		txq->tx_rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M);
+	expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE <<
+				  ICE_TXD_QW1_DTYPE_S);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 ice_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 228d2ff..ec0e52e 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -147,6 +147,8 @@ void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_txq_info *qinfo);
+int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 #endif /* _ICE_RXTX_H_ */
-- 
1.9.3
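
For context on how the two new ops are consumed: applications poll
descriptor state through the generic ethdev wrappers
rte_eth_rx_descriptor_status() and rte_eth_tx_descriptor_status(),
which dispatch to the callbacks registered above. A minimal sketch,
assuming a configured and started port; the queue and offset values
are only illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

/* Probe RX/TX descriptor state at a fixed distance past each tail. */
static void
probe_descriptor_state(uint16_t port_id, uint16_t queue_id)
{
	/* Status of the RX descriptor 32 entries past the tail. */
	int rx = rte_eth_rx_descriptor_status(port_id, queue_id, 32);

	if (rx == RTE_ETH_RX_DESC_DONE)
		printf("RX: written back, a packet is ready\n");
	else if (rx == RTE_ETH_RX_DESC_AVAIL)
		printf("RX: still owned by the hardware\n");
	else if (rx == RTE_ETH_RX_DESC_UNAVAIL)
		printf("RX: not usable (held by the driver)\n");
	else
		printf("RX: error %d\n", rx);	/* e.g. -EINVAL */

	/* Status of the TX descriptor 64 entries past the tail. */
	int tx = rte_eth_tx_descriptor_status(port_id, queue_id, 64);

	if (tx == RTE_ETH_TX_DESC_DONE)
		printf("TX: completed, the slot is reusable\n");
	else if (tx == RTE_ETH_TX_DESC_FULL)
		printf("TX: still in flight\n");
	else
		printf("TX: status/error %d\n", tx);
}

Note that ice_tx_descriptor_status() rounds the requested offset up to
the next descriptor carrying the RS bit before testing for completion,
so TX completion is reported at tx_rs_thresh granularity rather than
per descriptor.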

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE
  2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
                     ` (30 preceding siblings ...)
  2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 31/31] support descriptor ops Wenzhuo Lu
@ 2018-12-18 13:53   ` Ferruh Yigit
  2018-12-19  3:27     ` Zhang, Qi Z
  31 siblings, 1 reply; 309+ messages in thread
From: Ferruh Yigit @ 2018-12-18 13:53 UTC (permalink / raw)
  To: Wenzhuo Lu, dev

On 12/18/2018 8:46 AM, Wenzhuo Lu wrote:
> This patch set adds support for a new net PMD for
> Intel® Ethernet Network Adapter E810, also
> called ice.
> 
> The following features are enabled by this patch set:
> 
> Basic features:
> 1, Basic device operations: probe, initialization, start/stop, configure, info get.
> 2, RX/TX queue operations: setup/release, start/stop, info get.
> 3, RX/TX.
> 
> HW Offload features:
> 1, CRC Stripping/insertion.
> 2, L2/L3 checksum strip/insertion.
> 3, PVID set.
> 4, TPID change.
> 5, TSO (LRO/RSC not supported).
> 
> Stats:
> 1, statistics & xstats.
> 
> Switch functions:
> 1, MAC Filter Add/Delete.
> 2, VLAN Filter Add/Delete.
> 
> Power saving:
> 1, RX interrupt mode.
> 
> Misc:
> 1, Interrupt For Link Status.
> 2, firmware info query.
> 3, Jumbo Frame Support.
> 4, ptype check.
> 5, EEPROM check and set.
> 
> ---
> v2:
>  - Fix shared lib compile issue.
>  - Add meson build support.
>  - Update documents.
>  - Fix more checkpatch issues.
> 
> v3:
>  - Removed secondary process support.
>  - Split the base code into more patches.
>  - Pass NULL to rte_zmalloc.
>  - Changed some magic numbers to macros.
>  - Fixed the wrong implementation of a specific bitmap.
> 
> v4:
>  - Moved meson build forward.
>  - Updated and splitted the document to related patches.
>  - Updated the device info.
>  - Removed unnecessary compile config.
>  - Removed the code of ops rx_descriptor_done.
>  - Adjusted the order of the functions.
>  - Added error print for MAC setting.
> 
> v5:
>  - Removed ice_dcb.c/h.
>  - Fixed compile error of icc and i686.
>  - Announced dependence of uio and vfio.
> 
> v6:
>  - Adjusted the order of the patches.
>  - Fixed some checkpatch errors.
>  - Some minor changes.
> 
> Paul M Stillwell Jr (13):
>   net/ice/base: add registers for Intel(R) E800 Series NIC
>   net/ice/base: add basic structures
>   net/ice/base: add admin queue structures and commands
>   net/ice/base: add sideband queue info
>   net/ice/base: add device IDs for Intel(r) E800 Series NICs
>   net/ice/base: add control queue information
>   net/ice/base: add basic transmit scheduler
>   net/ice/base: add virtual switch code
>   net/ice/base: add code to work with the NVM
>   net/ice/base: add common functions
>   net/ice/base: add various headers
>   net/ice/base: add protocol structures and defines
>   net/ice/base: add structures for RX/TX queues
> 
> Wenzhuo Lu (18):
>   net/ice/base: add OS specific implementation
>   net/ice: support device initialization
>   net/ice: support device and queue ops
>   net/ice: support getting device information
>   net/ice: support link update
>   net/ice: support queue information getting
>   net/ice: support packet type getting
>   net/ice: support basic RX/TX
>   net/ice: support MTU setting
>   net/ice: support MAC ops
>   net/ice: support VLAN ops
>   net/ice: support RSS
>   net/ice: support RX queue interruption
>   net/ice: support FW version getting
>   net/ice: support EEPROM information getting
>   net/ice: support advance RX/TX
>   net/ice: support statistics
>   support descriptor ops

For series,
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 309+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE
  2018-12-18 13:53   ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Ferruh Yigit
@ 2018-12-19  3:27     ` Zhang, Qi Z
  0 siblings, 0 replies; 309+ messages in thread
From: Zhang, Qi Z @ 2018-12-19  3:27 UTC (permalink / raw)
  To: Yigit, Ferruh, Lu, Wenzhuo, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Tuesday, December 18, 2018 9:53 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE
> 
> On 12/18/2018 8:46 AM, Wenzhuo Lu wrote:
> > [...]
> 
> For series,
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

For patches 15/31 ~ 21/31,
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel with the minor fixes below.

1. Fix the missing "net/ice:" prefix in the title of patch 31/31.
2. Rename RX/TX to Rx/Tx to fix a check-git-log warning.

Thanks
Qi


^ permalink raw reply	[flat|nested] 309+ messages in thread

end of thread, other threads:[~2018-12-19  3:27 UTC | newest]

Thread overview: 309+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-23  6:56 [dpdk-dev] [PATCH 00/19] A new net PMD - ice Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 01/19] net/ice: add base code Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 02/19] net/ice: support device initialization Wenzhuo Lu
2018-11-23  7:56   ` Varghese, Vipin
2018-11-26  5:09     ` Li, Xiaoyun
2018-11-26  5:13       ` Varghese, Vipin
2018-11-26  5:19         ` Li, Xiaoyun
2018-11-26  5:22           ` Varghese, Vipin
2018-11-23  6:56 ` [dpdk-dev] [PATCH 03/19] net/ice: support device and queue ops Wenzhuo Lu
2018-12-03 15:24   ` Rami Rosen
2018-12-03 15:43     ` Rami Rosen
2018-12-06  2:53     ` Lu, Wenzhuo
2018-11-23  6:56 ` [dpdk-dev] [PATCH 04/19] net/ice: support getting device information Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 05/19] net/ice: support packet type getting Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 06/19] net/ice: support link update Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 07/19] net/ice: support MTU setting Wenzhuo Lu
2018-11-23  9:58   ` Varghese, Vipin
2018-11-26  3:38     ` Yang, Qiming
2018-11-26  3:58       ` Varghese, Vipin
2018-11-23  6:56 ` [dpdk-dev] [PATCH 08/19] net/ice: support MAC ops Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 09/19] net/ice: support VLAN ops Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 10/19] net/ice: support RSS Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 11/19] net/ice: support RX queue interruption Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 12/19] net/ice: support FW version getting Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 13/19] net/ice: support EEPROM information getting Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 14/19] net/ice: support statistics Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 15/19] net/ice: support queue information getting Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 16/19] net/ice: support basic RX/TX Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 17/19] net/ice: support advance RX/TX Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 18/19] net/ice: support descriptor ops Wenzhuo Lu
2018-11-23  6:56 ` [dpdk-dev] [PATCH 19/19] doc: add ICE description and update release note Wenzhuo Lu
2018-11-23  7:45   ` Varghese, Vipin
2018-11-26  3:42     ` Yang, Qiming
2018-11-26  3:59       ` Varghese, Vipin
2018-11-23 11:00 ` [dpdk-dev] [PATCH 00/19] A new net PMD - ice Thomas Monjalon
2018-12-05  6:39   ` Lu, Wenzhuo
2018-12-05  7:28     ` Thomas Monjalon
2018-12-05  8:19       ` Lu, Wenzhuo
2018-12-03  7:06 ` [dpdk-dev] [PATCH v2 00/20] " Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 01/20] net/ice: add base code Wenzhuo Lu
2018-12-04  4:18     ` Varghese, Vipin
2018-12-06  3:27       ` Lu, Wenzhuo
2018-12-06  4:28         ` Varghese, Vipin
2018-12-06  5:55           ` Lu, Wenzhuo
2018-12-06  6:03             ` Varghese, Vipin
2018-12-06  6:23               ` Ferruh Yigit
2018-12-06  6:38               ` Lu, Wenzhuo
2018-12-06  6:41                 ` Varghese, Vipin
2018-12-06  7:06                   ` Zhang, Qi Z
2018-12-06  7:17                   ` Lu, Wenzhuo
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 02/20] net/ice: support device initialization Wenzhuo Lu
2018-12-03  9:07     ` Varghese, Vipin
2018-12-04  4:40     ` Varghese, Vipin
2018-12-06  5:01       ` Lu, Wenzhuo
2018-12-06  5:33         ` Varghese, Vipin
2018-12-06  6:13           ` Lu, Wenzhuo
2018-12-06  6:31             ` Varghese, Vipin
2018-12-06  7:04               ` Lu, Wenzhuo
     [not found]                 ` <039ED4275CED7440929022BC67E70611532FA732@SHSMSX103.ccr.corp.intel.com>
     [not found]                   ` <6A0DE07E22DDAD4C9103DF62FEBC09093FE11879@shsmsx102.ccr.corp.intel.com>
     [not found]                     ` <039ED4275CED7440929022BC67E70611532FA76F@SHSMSX103.ccr.corp.intel.com>
     [not found]                       ` <6A0DE07E22DDAD4C9103DF62FEBC09093FE1188F@shsmsx102.ccr.corp.intel.com>
2018-12-13  5:16                         ` Varghese, Vipin
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 03/20] net/ice: support device and queue ops Wenzhuo Lu
2018-12-04  4:53     ` Varghese, Vipin
2018-12-06  5:03       ` Lu, Wenzhuo
2018-12-06  5:26         ` Varghese, Vipin
2018-12-06 11:52           ` Ananyev, Konstantin
2018-12-06 14:16             ` Varghese, Vipin
2018-12-07  1:02               ` Lu, Wenzhuo
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 04/20] net/ice: support getting device information Wenzhuo Lu
2018-12-04  4:59     ` Varghese, Vipin
2018-12-06  5:28       ` Lu, Wenzhuo
2018-12-06  5:49         ` Varghese, Vipin
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 05/20] net/ice: support packet type getting Wenzhuo Lu
2018-12-04  5:19     ` Varghese, Vipin
2018-12-06  5:34       ` Lu, Wenzhuo
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 06/20] net/ice: support link update Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 07/20] net/ice: support MTU setting Wenzhuo Lu
2018-12-04  5:25     ` Varghese, Vipin
2018-12-04  5:51       ` Varghese, Vipin
2018-12-06  5:41         ` Lu, Wenzhuo
2018-12-06  5:56           ` Varghese, Vipin
2018-12-06  5:35       ` Lu, Wenzhuo
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 08/20] net/ice: support MAC ops Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 09/20] net/ice: support VLAN ops Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 10/20] net/ice: support RSS Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 11/20] net/ice: support RX queue interruption Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 12/20] net/ice: support FW version getting Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 13/20] net/ice: support EEPROM information getting Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 14/20] net/ice: support statistics Wenzhuo Lu
2018-12-04  5:35     ` Varghese, Vipin
2018-12-06  5:37       ` Lu, Wenzhuo
2018-12-06  5:50         ` Varghese, Vipin
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 15/20] net/ice: support queue information getting Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX Wenzhuo Lu
2018-12-04  5:42     ` Varghese, Vipin
2018-12-04  5:44       ` Varghese, Vipin
2018-12-06  5:39       ` Lu, Wenzhuo
2018-12-06  5:55         ` Varghese, Vipin
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 17/20] net/ice: support advance RX/TX Wenzhuo Lu
2018-12-03  7:06   ` [dpdk-dev] [PATCH v2 18/20] net/ice: support descriptor ops Wenzhuo Lu
2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 19/20] doc: add ICE description and update release note Wenzhuo Lu
2018-12-03  8:15     ` Varghese, Vipin
2018-12-05  6:54       ` Lu, Wenzhuo
2018-12-06  4:34         ` Varghese, Vipin
2018-12-06  6:05           ` Lu, Wenzhuo
2018-12-06  6:08             ` Varghese, Vipin
2018-12-06  6:23               ` Lu, Wenzhuo
2018-12-06  6:25                 ` Varghese, Vipin
2018-12-06  6:35                   ` Lu, Wenzhuo
2018-12-03  7:07   ` [dpdk-dev] [PATCH v2 20/20] net/ice: support meson build Wenzhuo Lu
2018-12-03 10:00     ` Varghese, Vipin
2018-12-05  7:03       ` Lu, Wenzhuo
2018-12-06  4:31         ` Varghese, Vipin
2018-12-06  5:59           ` Lu, Wenzhuo
2018-12-06  6:05             ` Varghese, Vipin
2018-12-12  6:59 ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 01/34] net/ice: Add registers for Intel(R) E800 Series NIC Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 02/34] net/ice: Add basic structures Wenzhuo Lu
2018-12-12 15:19     ` Ferruh Yigit
2018-12-12 16:54       ` Stillwell Jr, Paul M
2018-12-12 16:57         ` Ferruh Yigit
2018-12-12 16:55       ` Ferruh Yigit
2018-12-12 15:19     ` Ferruh Yigit
2018-12-13  5:17       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 03/34] net/ice: Add admin queue structures and commands Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 04/34] net/ice: Add sideband queue info Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 05/34] net/ice: Add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 06/34] net/ice: Add control queue information Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 07/34] net/ice: Add data center bridging (DCB) Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 08/34] net/ice: Add basic transmit scheduler Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 09/34] net/ice: Add virtual switch code Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 10/34] net/ice: Add code to work with the NVM Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 11/34] net/ice: Add common functions Wenzhuo Lu
2018-12-12 19:58     ` Mattias Rönnblom
2018-12-12 21:18       ` Stillwell Jr, Paul M
2018-12-13  1:26         ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 12/34] net/ice: Add various headers Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 13/34] net/ice: Add protocol structures and defines Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 14/34] net/ice: Add structures for RX/TX queues Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 15/34] net/ice: add OS specific implementation Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 16/34] net/ice: support device initialization Wenzhuo Lu
2018-12-12 18:17     ` Ferruh Yigit
2018-12-13  2:39       ` Lu, Wenzhuo
2018-12-13 15:13         ` Ferruh Yigit
2018-12-14  2:30           ` Lu, Wenzhuo
2018-12-13  2:57       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 17/34] net/ice: support device and queue ops Wenzhuo Lu
2018-12-12 20:07     ` Mattias Rönnblom
2018-12-13  1:34       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 18/34] net/ice: support getting device information Wenzhuo Lu
2018-12-13  9:10     ` Zhang, Qi Z
2018-12-14  0:41       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 19/34] net/ice: support packet type getting Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 20/34] net/ice: support link update Wenzhuo Lu
2018-12-13  8:47     ` Zhang, Qi Z
2018-12-14  0:36       ` Lu, Wenzhuo
2018-12-14  2:43         ` Zhang, Qi Z
2018-12-14  8:09           ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 21/34] net/ice: support MTU setting Wenzhuo Lu
2018-12-13 21:05     ` Ferruh Yigit
2018-12-14  2:33       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 22/34] net/ice: support MAC ops Wenzhuo Lu
2018-12-13  9:00     ` Zhang, Qi Z
2018-12-14  0:37       ` Lu, Wenzhuo
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 23/34] net/ice: support VLAN ops Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 24/34] net/ice: support RSS Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 25/34] net/ice: support RX queue interruption Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 26/34] net/ice: support FW version getting Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 27/34] net/ice: support EEPROM information getting Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 28/34] net/ice: support statistics Wenzhuo Lu
2018-12-12  6:59   ` [dpdk-dev] [PATCH v3 29/34] net/ice: support queue information getting Wenzhuo Lu
2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 30/34] net/ice: support basic RX/TX Wenzhuo Lu
2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 31/34] net/ice: support advance RX/TX Wenzhuo Lu
2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 32/34] net/ice: support descriptor ops Wenzhuo Lu
2018-12-13 21:30     ` Ferruh Yigit
2018-12-14  2:39       ` Lu, Wenzhuo
2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 33/34] doc: add ICE description and update release note Wenzhuo Lu
2018-12-13 21:34     ` Ferruh Yigit
2018-12-14  2:42       ` Lu, Wenzhuo
2018-12-12  7:00   ` [dpdk-dev] [PATCH v3 34/34] net/ice: support meson build Wenzhuo Lu
2018-12-13 21:15     ` Ferruh Yigit
2018-12-14  2:38       ` Lu, Wenzhuo
2018-12-14  8:47         ` Ferruh Yigit
2018-12-16  1:43           ` Lu, Wenzhuo
2018-12-13  6:02   ` [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice Varghese, Vipin
2018-12-13  7:10     ` Lu, Wenzhuo
2018-12-13 13:09       ` Varghese, Vipin
2018-12-14  1:11         ` Lu, Wenzhuo
2018-12-14  3:26           ` Varghese, Vipin
2018-12-14  8:20             ` Lu, Wenzhuo
2018-12-14  8:34 ` [dpdk-dev] [PATCH v4 00/32] A new net PMD - ICE Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 01/32] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 02/32] net/ice/base: add basic structures Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 03/32] net/ice/base: add admin queue structures and commands Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 04/32] net/ice/base: add sideband queue info Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 05/32] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 06/32] net/ice/base: add control queue information Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 07/32] net/ice/base: add data center bridging (DCB) Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 08/32] net/ice/base: add basic transmit scheduler Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 09/32] net/ice/base: add virtual switch code Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 10/32] net/ice/base: add code to work with the NVM Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 11/32] net/ice/base: add common functions Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 12/32] net/ice/base: add various headers Wenzhuo Lu
2018-12-14  8:34   ` [dpdk-dev] [PATCH v4 13/32] net/ice/base: add protocol structures and defines Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 14/32] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 15/32] net/ice/base: add OS specific implementation Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 16/32] net/ice: support device initialization Wenzhuo Lu
2018-12-14  9:46     ` Ferruh Yigit
2018-12-14 11:19       ` Zhang, Qi Z
2018-12-17  4:54       ` Lu, Wenzhuo
2018-12-14 12:05     ` David Marchand
2018-12-17  1:11       ` Lu, Wenzhuo
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 17/32] net/ice: support device and queue ops Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 18/32] net/ice: support getting device information Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 19/32] net/ice: support packet type getting Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 20/32] net/ice: support link update Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 21/32] net/ice: support MTU setting Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 22/32] net/ice: support MAC ops Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 23/32] net/ice: support VLAN ops Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 24/32] net/ice: support RSS Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 25/32] net/ice: support RX queue interruption Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 26/32] net/ice: support FW version getting Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 27/32] net/ice: support EEPROM information getting Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 28/32] net/ice: support statistics Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 29/32] net/ice: support queue information getting Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 30/32] net/ice: support basic RX/TX Wenzhuo Lu
2018-12-14 13:00     ` Ferruh Yigit
2018-12-14 16:41       ` Thomas Monjalon
2018-12-17  6:47       ` Lu, Wenzhuo
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 31/32] net/ice: support advance RX/TX Wenzhuo Lu
2018-12-14  8:35   ` [dpdk-dev] [PATCH v4 32/32] net/ice: support descriptor ops Wenzhuo Lu
2018-12-17  7:37 ` [dpdk-dev] [PATCH v5 00/31] A new net PMD - ICE Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 02/31] net/ice/base: add basic structures Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 06/31] net/ice/base: add control queue information Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 10/31] net/ice/base: add common functions Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 11/31] net/ice/base: add various headers Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 15/31] net/ice: support device initialization Wenzhuo Lu
2018-12-17 22:29     ` Ferruh Yigit
2018-12-18  1:12       ` Lu, Wenzhuo
2018-12-17 23:15     ` Ferruh Yigit
2018-12-18  1:42       ` Lu, Wenzhuo
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 16/31] net/ice: support device and queue ops Wenzhuo Lu
2018-12-17 23:48     ` Ferruh Yigit
2018-12-18  1:33       ` Lu, Wenzhuo
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 17/31] net/ice: support getting device information Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 18/31] net/ice: support packet type getting Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 19/31] net/ice: support link update Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 20/31] net/ice: support MTU setting Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 21/31] net/ice: support MAC ops Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 22/31] net/ice: support VLAN ops Wenzhuo Lu
2018-12-17 22:45     ` Ferruh Yigit
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 23/31] net/ice: support RSS Wenzhuo Lu
2018-12-17 22:47     ` Ferruh Yigit
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 24/31] net/ice: support RX queue interruption Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 25/31] net/ice: support FW version getting Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 26/31] net/ice: support EEPROM information getting Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 27/31] net/ice: support statistics Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 28/31] net/ice: support queue information getting Wenzhuo Lu
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 29/31] net/ice: support basic RX/TX Wenzhuo Lu
2018-12-17 22:58     ` Ferruh Yigit
2018-12-18  2:49       ` Lu, Wenzhuo
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 30/31] net/ice: support advance RX/TX Wenzhuo Lu
2018-12-17 23:02     ` Ferruh Yigit
2018-12-18  3:11       ` Lu, Wenzhuo
2018-12-17 23:46     ` Ferruh Yigit
2018-12-18  3:13       ` Lu, Wenzhuo
2018-12-17  7:37   ` [dpdk-dev] [PATCH v5 31/31] net/ice: support descriptor ops Wenzhuo Lu
2018-12-18  8:46 ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 01/31] net/ice/base: add registers for Intel(R) E800 Series NIC Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 02/31] net/ice/base: add basic structures Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 03/31] net/ice/base: add admin queue structures and commands Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 04/31] net/ice/base: add sideband queue info Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 05/31] net/ice/base: add device IDs for Intel(r) E800 Series NICs Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 06/31] net/ice/base: add control queue information Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 07/31] net/ice/base: add basic transmit scheduler Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 08/31] net/ice/base: add virtual switch code Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 09/31] net/ice/base: add code to work with the NVM Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 10/31] net/ice/base: add common functions Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 11/31] net/ice/base: add various headers Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 12/31] net/ice/base: add protocol structures and defines Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 13/31] net/ice/base: add structures for RX/TX queues Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 14/31] net/ice/base: add OS specific implementation Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 15/31] net/ice: support device initialization Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 16/31] net/ice: support device and queue ops Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 17/31] net/ice: support getting device information Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 18/31] net/ice: support link update Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 19/31] net/ice: support queue information getting Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 20/31] net/ice: support packet type getting Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 21/31] net/ice: support basic RX/TX Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 22/31] net/ice: support MTU setting Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 23/31] net/ice: support MAC ops Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 24/31] net/ice: support VLAN ops Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 25/31] net/ice: support RSS Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 26/31] net/ice: support RX queue interruption Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 27/31] net/ice: support FW version getting Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 28/31] net/ice: support EEPROM information getting Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 29/31] net/ice: support advance RX/TX Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 30/31] net/ice: support statistics Wenzhuo Lu
2018-12-18  8:46   ` [dpdk-dev] [PATCH v6 31/31] support descriptor ops Wenzhuo Lu
2018-12-18 13:53   ` [dpdk-dev] [PATCH v6 00/31] A new net PMD - ICE Ferruh Yigit
2018-12-19  3:27     ` Zhang, Qi Z
